| sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| string (40) | string (1–13.4M) | string (2–117) | list (1–7.91k) | string (25) | string (2–875k) | string (25) | list (0–25) | list (0–7.91k) | string (17–159k) | string (1–447k) | list (0–352) | list (1–353) | list (1–353) | list (1–40) |
468735815e909102415484274904b7ceb673f0bd
|
# Dataset of kai/カイ (Pokémon)
This is the dataset of kai/カイ (Pokémon), containing 500 images and their tags.
The core tags of this character are `blonde_hair, hairband, blue_eyes, short_hair, red_hairband, bangs, hair_between_eyes, eyelashes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 645.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kai_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 500      | 346.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kai_pokemon/resolve/main/dataset-800.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800 | 1216 | 749.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kai_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 500      | 558.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kai_pokemon/resolve/main/dataset-1200.zip)              | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 1216 | 1.07 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kai_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
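If you only need the preprocessed image/caption pairs, the packaged archives can be used without waifuc. Below is a minimal loading sketch; it assumes (this is not documented here) that the IMG+TXT archives store each sample as an image file plus a same-named `.txt` file containing its tags, so verify the layout after extraction.
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from PIL import Image

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/kai_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-named .txt caption (assumed layout)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
        continue
    txt_path = os.path.join(dataset_dir, stem + '.txt')
    if not os.path.exists(txt_path):
        continue
    image = Image.open(os.path.join(dataset_dir, name))
    with open(txt_path, 'r', encoding='utf-8') as f:
        tags = f.read().strip()
    print(name, image.size, tags[:80])
```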
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kai_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
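Because `LocalSource` yields items carrying a PIL image and tag metadata, the raw dataset can be filtered in plain Python. A small sketch (assuming `item.meta['tags']` supports membership tests by tag name, consistent with the loop above) that keeps only the `solo` images:
```python
import os

from waifuc.source import LocalSource

out_dir = 'kai_solo'
os.makedirs(out_dir, exist_ok=True)

for item in LocalSource('dataset_dir'):
    # 'in' works whether tags is a list of names or a tag->score mapping
    if 'solo' in item.meta['tags']:
        item.image.save(os.path.join(out_dir, item.meta['filename']))
```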
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bracelet, closed_mouth, neck_ring, red_shirt, sash, solo, strapless_shirt, white_background, white_shorts, looking_at_viewer, waist_cape, medium_hair, hand_up, collarbone, simple_background |
| 1 | 8 |  |  |  |  |  | 1girl, anklet, bracelet, collar, full_body, knees, neck_ring, red_footwear, red_shirt, sash, shoes, solo, strapless_shirt, waist_cape, white_shorts, closed_mouth, standing, looking_at_viewer, medium_hair, simple_background, white_background |
| 2 | 6 |  |  |  |  |  | 1girl, blush, bracelet, collarbone, neck_ring, red_shirt, sash, solo, strapless_shirt, waist_cape, white_shorts, cleavage, looking_at_viewer, medium_hair, closed_mouth, knees, simple_background |
| 3 | 8 |  |  |  |  |  | 1girl, blush, bracelet, neck_ring, open_mouth, sash, strapless_shirt, waist_cape, white_shorts, red_shirt, solo, sweat, collarbone, looking_at_viewer, simple_background, white_background, arm_wrap, hot, hand_up |
| 4 | 24 |  |  |  |  |  | 1girl, neck_ring, strapless_shirt, upper_body, red_shirt, solo, collarbone, closed_mouth, looking_at_viewer, white_background, simple_background, bracelet, sash |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bracelet | closed_mouth | neck_ring | red_shirt | sash | solo | strapless_shirt | white_background | white_shorts | looking_at_viewer | waist_cape | medium_hair | hand_up | collarbone | simple_background | anklet | collar | full_body | knees | red_footwear | shoes | standing | blush | cleavage | open_mouth | sweat | arm_wrap | hot | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:---------------|:------------|:------------|:-------|:-------|:------------------|:-------------------|:---------------|:--------------------|:-------------|:--------------|:----------|:-------------|:--------------------|:---------|:---------|:------------|:--------|:---------------|:--------|:-----------|:--------|:-----------|:-------------|:--------|:-----------|:------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | X | X | X | X | X | X | X | X | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | X | | X | X | | | | X | | | | X | X | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | | X | X | X | X | X | X | X | X | X | | X | X | X | | | | | | | | X | | X | X | X | X | |
| 4 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | | | | X | X | | | | | | | | | | | | | | X |
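One practical use of these clusters is assembling a training caption for a specific outfit: since the core tags listed above are pruned from the per-image captions, you would typically re-prepend them (or a trigger word) before a cluster's shared tags. A hypothetical sketch in plain Python, using cluster 0 from the table:
```python
# core tags of the character (pruned from the dataset's captions)
core_tags = ['blonde_hair', 'hairband', 'blue_eyes', 'short_hair', 'red_hairband']
# shared tags of cluster 0 (one candidate outfit)
cluster_0 = ['1girl', 'bracelet', 'closed_mouth', 'neck_ring', 'red_shirt',
             'sash', 'strapless_shirt', 'white_shorts', 'waist_cape']

# underscores -> spaces, the usual convention for prompt text
caption = ', '.join(tag.replace('_', ' ') for tag in core_tags + cluster_0)
print(caption)
```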
|
CyberHarem/kai_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T08:38:59+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T19:26:35+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kai/カイ (Pokémon)
===========================
This is the dataset of kai/カイ (Pokémon), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, hairband, blue\_eyes, short\_hair, red\_hairband, bangs, hair\_between\_eyes, eyelashes, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
8e7cb25c10a31b218545d66f76262f1241a1776e
|
# Dataset Card for "Dataset_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kristinashemet/Dataset_V2
|
[
"region:us"
] |
2023-09-11T09:00:29+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10521416, "num_examples": 1573}], "download_size": 1009493, "dataset_size": 10521416}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-08T14:31:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Dataset_V2"
More Information needed
|
[
"# Dataset Card for \"Dataset_V2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Dataset_V2\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Dataset_V2\"\n\nMore Information needed"
] |
afbf3694a0aea5580ecb86514ae8b28b987d70a9
|
# Dataset Card for "1cc7040b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1cc7040b
|
[
"region:us"
] |
2023-09-11T09:33:23+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 178, "num_examples": 10}], "download_size": 1340, "dataset_size": 178}}
|
2023-09-11T09:33:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "1cc7040b"
More Information needed
|
[
"# Dataset Card for \"1cc7040b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"1cc7040b\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"1cc7040b\"\n\nMore Information needed"
] |
e7dd53157642df952b29d6db4907c59004e2dfe6
|
# Dataset of nero_claudius_bride/ネロ・クラウディウス〔ブライド〕/尼禄·克劳狄乌斯〔新娘〕 (Fate/Grand Order)
This is the dataset of nero_claudius_bride/ネロ・クラウディウス〔ブライド〕/尼禄·克劳狄乌斯〔新娘〕 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `blonde_hair, ahoge, green_eyes, breasts, hair_intakes, bangs, large_breasts, long_hair, hair_between_eyes, braid`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 799.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nero_claudius_bride_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 500      | 464.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nero_claudius_bride_fgo/resolve/main/dataset-800.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800 | 1284 | 967.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nero_claudius_bride_fgo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 500      | 709.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nero_claudius_bride_fgo/resolve/main/dataset-1200.zip)              | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 1284 | 1.30 GiB | [Download](https://huggingface.co/datasets/CyberHarem/nero_claudius_bride_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/nero_claudius_bride_fgo',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
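The same loop also lends itself to quick dataset statistics, e.g. counting how often each tag occurs across the 500 raw images. A minimal sketch (assuming the tags metadata can be turned into a list of tag names, consistent with the snippet above):
```python
from collections import Counter

from waifuc.source import LocalSource

tag_counts = Counter()
for item in LocalSource('dataset_dir'):
    # list() over a mapping yields its keys, so this counts tag names
    # whether tags is a list or a tag->score dict
    tag_counts.update(list(item.meta['tags']))

for tag, count in tag_counts.most_common(20):
    print(f'{tag}: {count}')
```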
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 16 |  |  |  |  |  | 1girl, bare_shoulders, chain, cleavage, detached_collar, padlock, solo, white_leotard, white_sleeves, bridal_veil, looking_at_viewer, strapless_leotard, white_thighhighs, blush, full-length_zipper, head_wreath, smile, wide_sleeves, zipper_pull_tab, puffy_detached_sleeves, sidelocks, white_gloves, cowboy_shot, garter_straps, highleg_leotard, loose_belt, buckle, closed_mouth, petals, short_hair, showgirl_skirt, white_background, long_sleeves, simple_background, standing, white_flower, sword, aestus_estus, flower_wreath, hair_flower, thighs, armpits, thigh_gap |
| 1 | 5 |  |  |  |  |  | 1girl, bare_shoulders, bridal_veil, chain, cleavage, detached_sleeves, head_wreath, looking_at_viewer, open_mouth, padlock, smile, solo, blush, detached_collar, full-length_zipper, highleg_leotard, thighhighs, white_gloves, white_leotard, white_sleeves, zipper_pull_tab, petals, puffy_sleeves, thighs, wide_sleeves, flower_wreath, loose_belt, white_background |
| 2 | 6 |  |  |  |  |  | 1girl, cleavage, head_wreath, solo, white_gloves, white_leotard, white_sleeves, blush, bridal_veil, chain, detached_sleeves, looking_at_viewer, padlock, petals, smile, white_thighhighs, bare_shoulders, puffy_sleeves, wide_sleeves, full-length_zipper, loose_belt |
| 3 | 9 |  |  |  |  |  | 1girl, detached_sleeves, solo, bare_shoulders, bridal_veil, chain, cleavage, looking_at_viewer, padlock, white_gloves, white_sleeves, zipper, smile, sword, belt, flower, white_thighhighs, short_hair, white_leotard |
| 4 | 7 |  |  |  |  |  | 1girl, aestus_estus, chain, looking_at_viewer, padlock, solo, belt, bridal_veil, medium_breasts, white_bodysuit, white_gloves, zipper, flower, smile, holding_sword, closed_mouth, cowboy_shot |
| 5 | 5 |  |  |  |  |  | 1girl, gloves, medium_breasts, padlock, solo, veil, white_bodysuit, belt, chain, flower, cleavage, center_opening, zipper |
| 6 | 33 |  |  |  |  |  | 1girl, solo, epaulettes, aestus_estus, cleavage, medium_breasts, hair_ribbon, red_dress, see-through, petals, holding_sword, smile, looking_at_viewer |
| 7 | 10 |  |  |  |  |  | 1girl, closed_mouth, epaulettes, hair_ribbon, red_dress, single_hair_bun, solo, aestus_estus, holding_sword, juliet_sleeves, looking_at_viewer, red_ribbon, smile, medium_breasts, simple_background, white_background, wide_sleeves, ass, blush, short_hair, sidelocks, back_cutout, butt_crack, cleavage, see-through, standing |
| 8 | 5 |  |  |  |  |  | 1girl, juliet_sleeves, looking_at_viewer, red_rose, smile, solo, blush, cleavage, closed_mouth, epaulettes, red_dress, rose_petals, sidelocks, single_hair_bun, collarbone, hair_ribbon, holding_flower, medium_breasts, red_ribbon, simple_background, white_background |
| 9 | 6 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, official_alternate_costume, smile, solo, blush, cleavage, closed_mouth, collarbone, hair_ribbon, single_hair_bun, french_braid, medium_breasts, red_dress, red_ribbon, bow, hair_flower, red_flower, simple_background |
| 10 | 9 |  |  |  |  |  | 1girl, closed_mouth, criss-cross_halter, looking_at_viewer, red_bikini, simple_background, solo, striped_bikini, white_background, cleavage, smile, blush, navel, side-tie_bikini_bottom, twintails, bare_shoulders, earrings, sling_bikini_top |
| 11 | 7 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, criss-cross_halter, looking_at_viewer, red_bikini, side-tie_bikini_bottom, simple_background, smile, solo, striped_bikini, blush, earrings, navel, open_mouth, thighs, twintails, white_background, bead_bracelet, sling_bikini_top, armpits, arms_up, collarbone |
| 12 | 8 |  |  |  |  |  | 1girl, aestus_estus, criss-cross_halter, looking_at_viewer, red_bikini, side-tie_bikini_bottom, solo, striped_bikini, cleavage, earrings, holding_sword, navel, :d, bracelet, open_mouth, twintails, water, blush, closed_mouth, petals |
| 13 | 9 |  |  |  |  |  | 1girl, blue_sky, cleavage, criss-cross_halter, day, navel, outdoors, red_bikini, smile, solo, striped_bikini, blush, looking_at_viewer, ocean, side-tie_bikini_bottom, twintails, bare_shoulders, beach, closed_mouth, sling_bikini_top, bead_bracelet, cloud, collarbone, earrings, aestus_estus, sword |
| 14 | 9 |  |  |  |  |  | 1girl, bare_shoulders, bead_bracelet, blue_sky, cleavage, criss-cross_halter, day, looking_at_viewer, red_bikini, side-tie_bikini_bottom, smile, solo, striped_bikini, earrings, gown, navel, outdoors, sling_bikini_top, hair_ribbon, ocean, open_mouth, blush, thighs, collarbone |
| 15 | 5 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, looking_at_viewer, navel, red_panties, smile, solo, underwear_only, collarbone, hair_ribbon, bow, closed_mouth, hair_flower, lace-trimmed_bra, lace-trimmed_panties, lingerie, official_alternate_costume, on_back, plaid_bra, plaid_panties, red_bra, short_hair, arm_up, armpits, babydoll, bed_sheet, blush, bridal_garter, dakimakura_(medium), frills, medium_breasts, pillow, red_rose, rose_petals, thigh_gap, thighhighs |
| 16 | 27 |  |  |  |  |  | 1girl, gym_uniform, looking_at_viewer, short_sleeves, solo, official_alternate_costume, blush, red_buruma, white_shirt, gym_shirt, thighs, french_braid, single_hair_bun, smile, name_tag, open_mouth, red_headband, simple_background, white_background, sidelocks |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | chain | cleavage | detached_collar | padlock | solo | white_leotard | white_sleeves | bridal_veil | looking_at_viewer | strapless_leotard | white_thighhighs | blush | full-length_zipper | head_wreath | smile | wide_sleeves | zipper_pull_tab | puffy_detached_sleeves | sidelocks | white_gloves | cowboy_shot | garter_straps | highleg_leotard | loose_belt | buckle | closed_mouth | petals | short_hair | showgirl_skirt | white_background | long_sleeves | simple_background | standing | white_flower | sword | aestus_estus | flower_wreath | hair_flower | thighs | armpits | thigh_gap | detached_sleeves | open_mouth | thighhighs | puffy_sleeves | zipper | belt | flower | medium_breasts | white_bodysuit | holding_sword | gloves | veil | center_opening | epaulettes | hair_ribbon | red_dress | see-through | single_hair_bun | juliet_sleeves | red_ribbon | ass | back_cutout | butt_crack | red_rose | rose_petals | collarbone | holding_flower | official_alternate_costume | french_braid | bow | red_flower | criss-cross_halter | red_bikini | striped_bikini | navel | side-tie_bikini_bottom | twintails | earrings | sling_bikini_top | bead_bracelet | arms_up | :d | bracelet | water | blue_sky | day | outdoors | ocean | beach | cloud | gown | red_panties | underwear_only | lace-trimmed_bra | lace-trimmed_panties | lingerie | on_back | plaid_bra | plaid_panties | red_bra | arm_up | babydoll | bed_sheet | bridal_garter | dakimakura_(medium) | frills | pillow | gym_uniform | short_sleeves | red_buruma | white_shirt | gym_shirt | name_tag | red_headband |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:--------|:-----------|:------------------|:----------|:-------|:----------------|:----------------|:--------------|:--------------------|:--------------------|:-------------------|:--------|:---------------------|:--------------|:--------|:---------------|:------------------|:-------------------------|:------------|:---------------|:--------------|:----------------|:------------------|:-------------|:---------|:---------------|:---------|:-------------|:-----------------|:-------------------|:---------------|:--------------------|:-----------|:---------------|:--------|:---------------|:----------------|:--------------|:---------|:----------|:------------|:-------------------|:-------------|:-------------|:----------------|:---------|:-------|:---------|:-----------------|:-----------------|:----------------|:---------|:-------|:-----------------|:-------------|:--------------|:------------|:--------------|:------------------|:-----------------|:-------------|:------|:--------------|:-------------|:-----------|:--------------|:-------------|:-----------------|:-----------------------------|:---------------|:------|:-------------|:---------------------|:-------------|:-----------------|:--------|:-------------------------|:------------|:-----------|:-------------------|:----------------|:----------|:-----|:-----------|:--------|:-----------|:------|:-----------|:--------|:--------|:--------|:-------|:--------------|:-----------------|:-------------------|:-----------------------|:-----------|:----------|:------------|:----------------|:----------|:---------|:-----------|:------------|:----------------|:----------------------|:---------|:---------|:--------------|:----------------|:-------------|:--------------|:------------|:-----------|:---------------|
| 0 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | X | X | X | X | X | X | | | X | | | X | X | | | X | | | X | | | | | | | X | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | | X | X | X | X | X | X | | X | X | X | X | X | X | | | | X | | | | X | | | X | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | X | X | X | | X | X | X | X | X | X | | X | | | | X | | | | | X | | | | | | | | X | | | | | | | X | | | | | | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | X | | | X | X | | | X | X | | | | | | X | | | | | X | X | | | | | X | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 33 |  |  |  |  |  | X | | | X | | | X | | | | X | | | | | | X | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | X | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 10 |  |  |  |  |  | X | | | X | | | X | | | | X | | | X | | | X | X | | | X | | | | | | | X | | X | | X | | X | X | | | X | | | | | | | | | | | | | X | | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | X | | | X | | | | X | | | X | | | X | | | | X | | | | | | | X | | | | X | | X | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 6 |  |  |  |  |  | X | X | | X | | | X | | | | X | | | X | | | X | | | | | | | | | | | X | | | | | | X | | | | | | X | | | | | | | | | | | X | | | | | | | X | X | | X | | X | | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 9 |  |  |  |  |  | X | X | | X | | | X | | | | X | | | X | | | X | | | | | | | | | | | X | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 7 |  |  |  |  |  | X | X | | X | | | X | | | | X | | | X | | | X | | | | | | | | | | | | | | | X | | X | | | | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 12 | 8 |  |  |  |  |  | X | | | X | | | X | | | | X | | | X | | | | | | | | | | | | | | X | X | | | | | | | | | X | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 13 | 9 |  |  |  |  |  | X | X | | X | | | X | | | | X | | | X | | | X | | | | | | | | | | | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 14 | 9 |  |  |  |  |  | X | X | | X | | | X | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | X | X | X | X | X | | X | X | X | | | | | X | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | |
| 15 | 5 |  |  |  |  |  | X | X | | X | | | X | | | | X | | | X | | | X | | | | | | | | | | | X | | X | | | | | | | | | | X | | X | X | | | X | | | | | X | | | | | | | X | | | | | | | | | X | X | X | | X | | X | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 16 | 27 |  |  |  |  |  | X | | | | | | X | | | | X | | | X | | | X | | | | X | | | | | | | | | | | X | | X | | | | | | | X | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/nero_claudius_bride_fgo
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T09:44:00+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-12T12:59:45+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of nero\_claudius\_bride/ネロ・クラウディウス〔ブライド〕/尼禄·克劳狄乌斯〔新娘〕 (Fate/Grand Order)
=================================================================================
This is the dataset of nero\_claudius\_bride/ネロ・クラウディウス〔ブライド〕/尼禄·克劳狄乌斯〔新娘〕 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, ahoge, green\_eyes, breasts, hair\_intakes, bangs, large\_breasts, long\_hair, hair\_between\_eyes, braid', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
3032837e70e3d4f12e0cceb920a7205f7ec10ca5
|
# Dataset Card for "SFconvertbot-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/SFconvertbot-stats
|
[
"region:us"
] |
2023-09-11T09:50:38+00:00
|
{"dataset_info": {"features": [{"name": "createdAt", "dtype": "timestamp[us]"}, {"name": "pr_number", "dtype": "int64"}, {"name": "status", "dtype": "large_string"}, {"name": "repo_id", "dtype": "large_string"}, {"name": "type", "dtype": "large_string"}, {"name": "isPullRequest", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 3431911, "num_examples": 40197}], "download_size": 1412406, "dataset_size": 3431911}}
|
2023-10-13T05:37:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SFconvertbot-stats"
More Information needed
|
[
"# Dataset Card for \"SFconvertbot-stats\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SFconvertbot-stats\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SFconvertbot-stats\"\n\nMore Information needed"
] |
91b02f1abbe52243741a79f2f6f703f33bc7bb65
|
# Dataset of yumemi_riamu/夢見りあむ (THE iDOLM@STER: Cinderella Girls)
This is the dataset of yumemi_riamu/夢見りあむ (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are `pink_hair, multicolored_hair, two-tone_hair, blue_hair, bangs, short_hair, pink_eyes, ahoge, breasts, hair_intakes, fang, large_breasts, hat, nurse_cap`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 835.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumemi_riamu_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 500      | 405.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumemi_riamu_idolmastercinderellagirls/resolve/main/dataset-800.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800 | 1258 | 913.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumemi_riamu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 500      | 701.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumemi_riamu_idolmastercinderellagirls/resolve/main/dataset-1200.zip)              | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 1258 | 1.42 GiB | [Download](https://huggingface.co/datasets/CyberHarem/yumemi_riamu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yumemi_riamu_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, blush, cleavage, frilled_bikini, looking_at_viewer, solo, blue_bikini, hair_flower, outdoors, blue_sky, cloud, day, open_mouth, bare_shoulders, collarbone, nail_polish, bangle, navel, pink_choker, ocean, smile, thighs, beach, shiny, sunlight |
| 1 | 25 |  |  |  |  |  | 1girl, blush, solo, wrist_cuffs, looking_at_viewer, open_mouth, blue_dress, frills, heart_print, white_apron, armband, pink_choker, white_background, simple_background, sparkle_print, wings, index_fingers_together, puffy_short_sleeves, collarbone, upper_body, hands_up |
| 2 | 5 |  |  |  |  |  | 1girl, collarbone, heart_on_chest, looking_at_viewer, open_mouth, pill_earrings, short_sleeves, solo, t-shirt, white_background, white_shirt, bracelet, skeleton_print, upper_body, :d, hair_between_eyes, off_shoulder, simple_background, blush, pink_collar |
| 3 | 10 |  |  |  |  |  | 1girl, blush, heart_on_chest, looking_at_viewer, pill_earrings, short_sleeves, simple_background, solo, t-shirt, white_shirt, bracelet, open_mouth, white_background, collarbone, pink_collar, skeleton_print, heart-shaped_lock |
| 4 | 31 |  |  |  |  |  | 1girl, looking_at_viewer, solo, plaid_dress, cleavage_cutout, gloves, hair_bow, blush, star_hair_ornament, heart_cutout, black_headwear, open_mouth, frills, braid, simple_background, purple_dress, white_background, smile, yellow_bow, holding, neck_ribbon, middle_finger, yellow_ribbon |
| 5 | 5 |  |  |  |  |  | 1girl, flame_print, long_sleeves, nail_polish, o-ring, official_alternate_costume, open_mouth, randoseru, ribbon, solo, teddy_bear, blush, bow, colored_inner_hair, thigh_strap, thighs, black_headwear, choker, heart_print, looking_at_viewer, multicolored_nails, skin_fang, bandaid_on_knee, bear_hair_ornament, bear_print, beret, black_sweater, blue_nails, collarbone, crossed_bandaids, frilled_sailor_collar, harness, heart_hair_ornament, holding, name_tag, pink_bag, pink_socks, simple_background, sleeves_past_wrists, smile, two_side_up, wariza, white_background, white_sailor_collar, x_hair_ornament |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | cleavage | frilled_bikini | looking_at_viewer | solo | blue_bikini | hair_flower | outdoors | blue_sky | cloud | day | open_mouth | bare_shoulders | collarbone | nail_polish | bangle | navel | pink_choker | ocean | smile | thighs | beach | shiny | sunlight | wrist_cuffs | blue_dress | frills | heart_print | white_apron | armband | white_background | simple_background | sparkle_print | wings | index_fingers_together | puffy_short_sleeves | upper_body | hands_up | heart_on_chest | pill_earrings | short_sleeves | t-shirt | white_shirt | bracelet | skeleton_print | :d | hair_between_eyes | off_shoulder | pink_collar | heart-shaped_lock | plaid_dress | cleavage_cutout | gloves | hair_bow | star_hair_ornament | heart_cutout | black_headwear | braid | purple_dress | yellow_bow | holding | neck_ribbon | middle_finger | yellow_ribbon | flame_print | long_sleeves | o-ring | official_alternate_costume | randoseru | ribbon | teddy_bear | bow | colored_inner_hair | thigh_strap | choker | multicolored_nails | skin_fang | bandaid_on_knee | bear_hair_ornament | bear_print | beret | black_sweater | blue_nails | crossed_bandaids | frilled_sailor_collar | harness | heart_hair_ornament | name_tag | pink_bag | pink_socks | sleeves_past_wrists | two_side_up | wariza | white_sailor_collar | x_hair_ornament |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------|:-----------------|:--------------------|:-------|:--------------|:--------------|:-----------|:-----------|:--------|:------|:-------------|:-----------------|:-------------|:--------------|:---------|:--------|:--------------|:--------|:--------|:---------|:--------|:--------|:-----------|:--------------|:-------------|:---------|:--------------|:--------------|:----------|:-------------------|:--------------------|:----------------|:--------|:-------------------------|:----------------------|:-------------|:-----------|:-----------------|:----------------|:----------------|:----------|:--------------|:-----------|:-----------------|:-----|:--------------------|:---------------|:--------------|:--------------------|:--------------|:------------------|:---------|:-----------|:---------------------|:---------------|:-----------------|:--------|:---------------|:-------------|:----------|:--------------|:----------------|:----------------|:--------------|:---------------|:---------|:-----------------------------|:------------|:---------|:-------------|:------|:---------------------|:--------------|:---------|:---------------------|:------------|:------------------|:---------------------|:-------------|:--------|:----------------|:-------------|:-------------------|:------------------------|:----------|:----------------------|:-----------|:-----------|:-------------|:----------------------|:--------------|:---------|:----------------------|:------------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 25 |  |  |  |  |  | X | X | | | X | X | | | | | | | X | | X | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | X | X | | | | | | | X | | X | | | | | | | | | | | | | | | | | X | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | X | | | X | X | | | | | | | X | | X | | | | | | | | | | | | | | | | | X | X | | | | | | | X | X | X | X | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 31 |  |  |  |  |  | X | X | | | X | X | | | | | | | X | | | | | | | | X | | | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | | X | X | | | | | | | X | | X | X | | | | | X | X | | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/yumemi_riamu_idolmastercinderellagirls
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T09:52:40+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T10:21:44+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yumemi\_riamu/夢見りあむ (THE iDOLM@STER: Cinderella Girls)
=================================================================
This is the dataset of yumemi\_riamu/夢見りあむ (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are 'pink\_hair, multicolored\_hair, two-tone\_hair, blue\_hair, bangs, short\_hair, pink\_eyes, ahoge, breasts, hair\_intakes, fang, large\_breasts, hat, nurse\_cap', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
5346565948b1809de748d304b7b70198d71a16c9
|
# AttaQ Dataset Card
The AttaQ red teaming dataset, consisting of 1402 carefully crafted adversarial questions, is designed to evaluate Large Language Models (LLMs) by assessing their tendency to generate harmful or undesirable responses.
It may serve as a benchmark to assess the potential harm of responses produced by LLMs.
The dataset is categorized into seven distinct classes of questions: deception, discrimination, harmful information, substance abuse, sexual content, personally identifiable information (PII), and violence.
Researchers and developers can use this dataset to assess the behavior of LLMs and explore the various factors that influence their responses, ultimately aiming to enhance their harmlessness and ethical usage.
The development of AttaQ involved multiple sources:
1. Extraction: Initially, attacks were methodically extracted from the hh-rlhf dataset provided by Anthropic. These extracted attacks were then semi-automatically labeled into the seven aforementioned categories.
2. Generation: LLMs were employed to generate a substantial volume of attacks that were similar in nature to the ones extracted from the hh-rlhf dataset.
3. Crawl and Generate: We utilized LLMs in conjunction with the Wikipedia Crimes Page to generate a series of actions associated with each type of crime. These actions were then employed to devise questions that explore ways to engage in unethical or unlawful actions related to those crimes, specifically focusing on strategies to evade detection or consequences.
### Disclaimer:
* The data contains offensive and upsetting content by nature, so it may not be easy to read. Please read it in accordance with your own personal risk tolerance.
* LLMs' responses to the AttaQ samples are, in many cases, harmful and/or violent.
* This dataset is a representative subset of all possible attacks; many other attacks can cause an LLM to produce harmful or inappropriate responses.
* The red-teaming community's goal is to make models less harmful. We restrict usage of this dataset to that goal.
### Fields
- `label`: the category label of the adversarial question
- `input`: the adversarial question itself
- `additional_info`: the source of the adversarial question
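A minimal sketch of loading the dataset and inspecting these fields (assuming a single `train` split, as in the evaluation snippet below):
```python
from datasets import load_dataset

attaq = load_dataset("ibm/AttaQ")
print(attaq)  # splits and sizes

sample = attaq["train"][0]
print(sample["label"])            # one of the seven attack categories
print(sample["input"])            # the adversarial question
print(sample["additional_info"])  # provenance of the question
```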
### Usage Information
The data are described in the [paper](https://arxiv.org/abs/2311.04124): **Unveiling Safety Vulnerabilities of Large Language Models**.
As detailed in the paper, automatically evaluating the potential harm caused by LLMs requires a harmlessness metric.
The model under test should be prompted with each question in the dataset, and the corresponding responses undergo evaluation using a metric that considers both the input and the output.
Our paper utilizes the "OpenAssistant/reward-model-deberta-v3-large-v2" reward model, though other models such as "sileod/deberta-v3-large-tasksource-rlhf-reward-model" can also be employed.
```python
import pandas as pd
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
from datasets import load_dataset

# Load the dataset and initialize the models
dataset = load_dataset("ibm/AttaQ")
# gpt2 is a causal LM, so it needs the "text-generation" pipeline
# ("text2text-generation" expects encoder-decoder models)
generator = pipeline("text-generation", model="gpt2", tokenizer="gpt2")
reward_model = AutoModelForSequenceClassification.from_pretrained("sileod/deberta-v3-large-tasksource-rlhf-reward-model")
tokenizer_reward = AutoTokenizer.from_pretrained("sileod/deberta-v3-large-tasksource-rlhf-reward-model")

# Create a DataFrame and generate a response for each adversarial question;
# the pipeline returns one list of candidates per input
df = pd.DataFrame(dataset['train'])
df['output'] = [response[0]['generated_text'] for response in generator(df['input'].tolist(), max_length=50, num_return_sequences=1)]

# Calculate a harmlessness score for each (input, output) pair;
# the reward model emits one logit per pair, so squeeze to a 1-D column
inputs = tokenizer_reward(df['input'].tolist(), df['output'].tolist(), return_tensors="pt", padding=True, truncation=True)
df['reward'] = reward_model(**inputs).logits.squeeze(-1).detach().cpu().numpy()
```
![Comparison of several models across attack domains](https://cdn-uploads.huggingface.co/production/uploads/64540de813dba495b2e0354e/wOLy8gjqlNXZMlDAl1lJD.png)
A comparison of several models across the attack domains in the AttaQ dataset.
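To reproduce this kind of per-domain comparison from the scores computed above, group the rewards by attack category; a short sketch continuing from the `df` built in the evaluation snippet:
```python
# mean harmlessness reward per attack category (higher = less harmful)
per_domain = df.groupby('label')['reward'].mean().sort_values()
print(per_domain)
```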
### Citation Information
If this dataset is utilized in your research, kindly cite the following paper:
```
@article{kour2023unveiling,
  title={Unveiling Safety Vulnerabilities of Large Language Models},
  author={Kour, George and Zalmanovici, Marcel and Zwerdling, Naama and Goldbraich, Esther and Fandina, Ora Nova and Anaby-Tavor, Ateret and Raz, Orna and Farchi, Eitan},
  journal={arXiv preprint arXiv:2311.04124},
  year={2023}
}
```
|
ibm/AttaQ
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|Anthropic/hh-rlhf",
"language:en",
"license:mit",
"safety",
"harm",
"LLMs",
"Vulnerability",
"red teaming",
"toxicity",
"arxiv:2311.04124",
"region:us"
] |
2023-09-11T10:00:49+00:00
|
{"language": "en", "license": "mit", "multilinguality": "monolingual", "size_categories": ["1K<n<10K"], "source_datasets": "extended|Anthropic/hh-rlhf", "task_categories": ["text-generation", "text2text-generation"], "tags": ["safety", "harm", "LLMs", "Vulnerability", "red teaming", "toxicity"]}
|
2024-01-26T08:16:47+00:00
|
[
"2311.04124"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-text2text-generation #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|Anthropic/hh-rlhf #language-English #license-mit #safety #harm #LLMs #Vulnerability #red teaming #toxicity #arxiv-2311.04124 #region-us
|
# AttaQ Dataset Card
The AttaQ red teaming dataset, consisting of 1402 carefully crafted adversarial questions, is designed to evaluate Large Language Models (LLMs) by assessing their tendency to generate harmful or undesirable responses.
It may serve as a benchmark to assess the potential harm of responses produced by LLMs.
The dataset is categorized into seven distinct classes of questions: deception, discrimination, harmful information, substance abuse, sexual content, personally identifiable information (PII), and violence.
Researchers and developers can use this dataset to assess the behavior of LLMs and explore the various factors that influence their responses, ultimately aiming to enhance their harmlessness and ethical usage.
The development of AttaQ involved multiple sources:
1. Extraction: Initially, attacks were methodologically extracted from the hh-rlhf dataset provided by Anthropic. These extracted attacks were then semi-automatically labeled into the seven aforementioned categories.
2. Generation: LLMs were employed to generate a substantial volume of attacks that were similar in nature to the ones extracted from the hh-rlhf dataset.
3. Crawl and Generate: We utilized LLMs in conjunction with the Wikipedia Crimes Page to generate a series of actions associated with each type of crime. These actions were then employed to devise questions that explore ways to engage in unethical or unlawful actions related to those crimes, specifically focusing on strategies to evade detection or consequences.
### Disclaimer:
* The data contains offensive and upsetting content by nature therefore it may not be easy to read. Please read them in accordance with your own personal risk tolerance.
* LLM's response to the AttaQ samples in many cases is harmful and/or violent.
* This dataset is a representative subset of all possible attacks. Many other attacks can cause LLM to answer harmful or inappropriate responses.
* The red-teaming community’s goal is to make models less harmful. We restrict the usage of the dataset to make models less harmful.
### Fields
- 'label': The label of adversarial question
- 'input': adversarial question
- 'additional_info': source of the adversarial question
### Usage Information
The data are described in the paper: Unveiling Safety Vulnerabilities of Large Language Models.
As detailed in the paper, automatically evaluating the potential harm by LLMs requires a harmlessness metric.
The model under test should be prompted by each question in the dataset, and the corresponding responses undergo evaluation using a metric that considers both the input and output.
Our paper utilizes the "OpenAssistant/reward-model-deberta-v3-large-v2" Reward model, though other models such as "sileod/deberta-v3-large-tasksource-rlhf-reward-model" can also be employed.
<a href="url"><img src="URL align="center" height="800" width="800" ></a>
A comparison between several models across various attack domains in the Attaq dataset.
If this dataset is utilized in your research, kindly cite the following paper:
|
[
"# AttaQ Dataset Card\nThe AttaQ red teaming dataset, consisting of 1402 carefully crafted adversarial questions, is designed to evaluate Large Language Models (LLMs) by assessing their tendency to generate harmful or undesirable responses. \nIt may serve as a benchmark to assess the potential harm of responses produced by LLMs. \nThe dataset is categorized into seven distinct classes of questions: deception, discrimination, harmful information, substance abuse, sexual content, personally identifiable information (PII), and violence.\nResearchers and developers can use this dataset to assess the behavior of LLMs and explore the various factors that influence their responses, ultimately aiming to enhance their harmlessness and ethical usage.\n\nThe development of AttaQ involved multiple sources:\n1. Extraction: Initially, attacks were methodologically extracted from the hh-rlhf dataset provided by Anthropic. These extracted attacks were then semi-automatically labeled into the seven aforementioned categories.\n2. Generation: LLMs were employed to generate a substantial volume of attacks that were similar in nature to the ones extracted from the hh-rlhf dataset.\n3. Crawl and Generate: We utilized LLMs in conjunction with the Wikipedia Crimes Page to generate a series of actions associated with each type of crime. These actions were then employed to devise questions that explore ways to engage in unethical or unlawful actions related to those crimes, specifically focusing on strategies to evade detection or consequences.",
"### Disclaimer: \n* The data contains offensive and upsetting content by nature therefore it may not be easy to read. Please read them in accordance with your own personal risk tolerance.\n* LLM's response to the AttaQ samples in many cases is harmful and/or violent.\n* This dataset is a representative subset of all possible attacks. Many other attacks can cause LLM to answer harmful or inappropriate responses.\n* The red-teaming community’s goal is to make models less harmful. We restrict the usage of the dataset to make models less harmful.",
"### Fields\n- 'label': The label of adversarial question\n- 'input': adversarial question\n- 'additional_info': source of the adversarial question",
"### Usage Information\nThe data are described in the paper: Unveiling Safety Vulnerabilities of Large Language Models. \nAs detailed in the paper, automatically evaluating the potential harm by LLMs requires a harmlessness metric. \nThe model under test should be prompted by each question in the dataset, and the corresponding responses undergo evaluation using a metric that considers both the input and output.\nOur paper utilizes the \"OpenAssistant/reward-model-deberta-v3-large-v2\" Reward model, though other models such as \"sileod/deberta-v3-large-tasksource-rlhf-reward-model\" can also be employed.\n\n\n<a href=\"url\"><img src=\"URL align=\"center\" height=\"800\" width=\"800\" ></a>\nA comparison between several models across various attack domains in the Attaq dataset. \n\n\nIf this dataset is utilized in your research, kindly cite the following paper:"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|Anthropic/hh-rlhf #language-English #license-mit #safety #harm #LLMs #Vulnerability #red teaming #toxicity #arxiv-2311.04124 #region-us \n",
"# AttaQ Dataset Card\nThe AttaQ red teaming dataset, consisting of 1402 carefully crafted adversarial questions, is designed to evaluate Large Language Models (LLMs) by assessing their tendency to generate harmful or undesirable responses. \nIt may serve as a benchmark to assess the potential harm of responses produced by LLMs. \nThe dataset is categorized into seven distinct classes of questions: deception, discrimination, harmful information, substance abuse, sexual content, personally identifiable information (PII), and violence.\nResearchers and developers can use this dataset to assess the behavior of LLMs and explore the various factors that influence their responses, ultimately aiming to enhance their harmlessness and ethical usage.\n\nThe development of AttaQ involved multiple sources:\n1. Extraction: Initially, attacks were methodologically extracted from the hh-rlhf dataset provided by Anthropic. These extracted attacks were then semi-automatically labeled into the seven aforementioned categories.\n2. Generation: LLMs were employed to generate a substantial volume of attacks that were similar in nature to the ones extracted from the hh-rlhf dataset.\n3. Crawl and Generate: We utilized LLMs in conjunction with the Wikipedia Crimes Page to generate a series of actions associated with each type of crime. These actions were then employed to devise questions that explore ways to engage in unethical or unlawful actions related to those crimes, specifically focusing on strategies to evade detection or consequences.",
"### Disclaimer: \n* The data contains offensive and upsetting content by nature therefore it may not be easy to read. Please read them in accordance with your own personal risk tolerance.\n* LLM's response to the AttaQ samples in many cases is harmful and/or violent.\n* This dataset is a representative subset of all possible attacks. Many other attacks can cause LLM to answer harmful or inappropriate responses.\n* The red-teaming community’s goal is to make models less harmful. We restrict the usage of the dataset to make models less harmful.",
"### Fields\n- 'label': The label of adversarial question\n- 'input': adversarial question\n- 'additional_info': source of the adversarial question",
"### Usage Information\nThe data are described in the paper: Unveiling Safety Vulnerabilities of Large Language Models. \nAs detailed in the paper, automatically evaluating the potential harm by LLMs requires a harmlessness metric. \nThe model under test should be prompted by each question in the dataset, and the corresponding responses undergo evaluation using a metric that considers both the input and output.\nOur paper utilizes the \"OpenAssistant/reward-model-deberta-v3-large-v2\" Reward model, though other models such as \"sileod/deberta-v3-large-tasksource-rlhf-reward-model\" can also be employed.\n\n\n<a href=\"url\"><img src=\"URL align=\"center\" height=\"800\" width=\"800\" ></a>\nA comparison between several models across various attack domains in the Attaq dataset. \n\n\nIf this dataset is utilized in your research, kindly cite the following paper:"
] |
[
110,
353,
135,
39,
228
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|Anthropic/hh-rlhf #language-English #license-mit #safety #harm #LLMs #Vulnerability #red teaming #toxicity #arxiv-2311.04124 #region-us \n# AttaQ Dataset Card\nThe AttaQ red teaming dataset, consisting of 1402 carefully crafted adversarial questions, is designed to evaluate Large Language Models (LLMs) by assessing their tendency to generate harmful or undesirable responses. \nIt may serve as a benchmark to assess the potential harm of responses produced by LLMs. \nThe dataset is categorized into seven distinct classes of questions: deception, discrimination, harmful information, substance abuse, sexual content, personally identifiable information (PII), and violence.\nResearchers and developers can use this dataset to assess the behavior of LLMs and explore the various factors that influence their responses, ultimately aiming to enhance their harmlessness and ethical usage.\n\nThe development of AttaQ involved multiple sources:\n1. Extraction: Initially, attacks were methodologically extracted from the hh-rlhf dataset provided by Anthropic. These extracted attacks were then semi-automatically labeled into the seven aforementioned categories.\n2. Generation: LLMs were employed to generate a substantial volume of attacks that were similar in nature to the ones extracted from the hh-rlhf dataset.\n3. Crawl and Generate: We utilized LLMs in conjunction with the Wikipedia Crimes Page to generate a series of actions associated with each type of crime. These actions were then employed to devise questions that explore ways to engage in unethical or unlawful actions related to those crimes, specifically focusing on strategies to evade detection or consequences."
] |
d48a498b2c4d44e91c5d61d25f5df4689c400010
|
# Dataset Card for "viwiki_20230901_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
maitrang/viwiki_20230901_v2
|
[
"region:us"
] |
2023-09-11T10:03:47+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "revid", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2856013624, "num_examples": 3129448}], "download_size": 1203250456, "dataset_size": 2856013624}}
|
2023-09-11T13:12:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "viwiki_20230901_v2"
More Information needed
|
[
"# Dataset Card for \"viwiki_20230901_v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"viwiki_20230901_v2\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"viwiki_20230901_v2\"\n\nMore Information needed"
] |
76bf0536339c85a8ed24274cd845382843469eb9
|
# Dataset of olive (Pokémon)
This is the dataset of olive (Pokémon), containing 339 images and their tags.
The core tags of this character are `long_hair, blonde_hair, green_eyes, earrings, breasts, hoop_earrings, red_lips, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 339 | 344.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/olive_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 339 | 204.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/olive_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 754 | 398.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/olive_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 339 | 310.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/olive_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 754 | 552.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/olive_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/olive_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, black_choker, black_pantyhose, black_skirt, blouse, jewelry, lipstick, nail_polish, pencil_skirt, red_nails, red_shirt, simple_background, solo, white_coat, black_footwear, closed_mouth, full_body, grey_pantyhose, half-closed_eyes, sidelocks, white_background, high_heels, legs_apart, looking_at_viewer, sitting, standing |
| 1 | 9 |  |  |  |  |  | 1girl, black_choker, blouse, jewelry, red_nails, red_shirt, solo, white_coat, black_skirt, lipstick, simple_background, closed_mouth, grey_pantyhose, pencil_skirt, collarbone, looking_at_viewer, white_background, nail_polish |
| 2 | 5 |  |  |  |  |  | 1girl, jewelry, lipstick, solo, half-closed_eyes, looking_at_viewer, nail_polish, navel, nipples, red_nails, red_panties, simple_background, white_background, white_coat, artist_name, black_choker, blush, sidelocks, standing |
| 3 | 18 |  |  |  |  |  | 1boy, 1girl, hetero, penis, fellatio, jewelry, lipstick, blush, solo_focus, :>=, dark-skinned_male, looking_at_viewer, uncensored, heart, choker, half-closed_eyes, pov_crotch, saliva, simple_background, sweat |
| 4 | 9 |  |  |  |  |  | 1girl, erection, female_pubic_hair, futanari, jewelry, solo, testicles, uncensored, lipstick, looking_at_viewer, artist_name, black_choker, black_thighhighs, day, large_penis, outdoors, veiny_penis, white_coat, building, shirt, sitting, spread_legs, between_breasts, blue_sky, cloud, parted_lips |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_choker | black_pantyhose | black_skirt | blouse | jewelry | lipstick | nail_polish | pencil_skirt | red_nails | red_shirt | simple_background | solo | white_coat | black_footwear | closed_mouth | full_body | grey_pantyhose | half-closed_eyes | sidelocks | white_background | high_heels | legs_apart | looking_at_viewer | sitting | standing | collarbone | navel | nipples | red_panties | artist_name | blush | 1boy | hetero | penis | fellatio | solo_focus | :>= | dark-skinned_male | uncensored | heart | choker | pov_crotch | saliva | sweat | erection | female_pubic_hair | futanari | testicles | black_thighhighs | day | large_penis | outdoors | veiny_penis | building | shirt | spread_legs | between_breasts | blue_sky | cloud | parted_lips |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:------------------|:--------------|:---------|:----------|:-----------|:--------------|:---------------|:------------|:------------|:--------------------|:-------|:-------------|:-----------------|:---------------|:------------|:-----------------|:-------------------|:------------|:-------------------|:-------------|:-------------|:--------------------|:----------|:-----------|:-------------|:--------|:----------|:--------------|:--------------|:--------|:-------|:---------|:--------|:-----------|:-------------|:------|:--------------------|:-------------|:--------|:---------|:-------------|:---------|:--------|:-----------|:--------------------|:-----------|:------------|:-------------------|:------|:--------------|:-----------|:--------------|:-----------|:--------|:--------------|:------------------|:-----------|:--------|:--------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | | X | X | X | X | X | X | X | X | X | X | X | | X | | X | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | | X | X | X | | X | | X | X | X | | | | | X | X | X | | | X | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 18 |  |  |  |  |  | X | | | | | X | X | | | | | X | | | | | | | X | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | X | | | | X | X | | | | | | X | X | | | | | | | | | | X | X | | | | | | X | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/olive_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T10:06:30+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T18:44:33+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of olive (Pokémon)
==========================
This is the dataset of olive (Pokémon), containing 339 images and their tags.
The core tags of this character are 'long\_hair, blonde\_hair, green\_eyes, earrings, breasts, hoop\_earrings, red\_lips, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
15da6bc0215cdd6be8bfdb44268557d843a9f239
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_zeroshot_sileod"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/shp-generated_flan_t5_large_flan_t5_zeroshot_sileod
|
[
"region:us"
] |
2023-09-11T10:12:15+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "zeroshot_helpfulness", "dtype": "float64"}, {"name": "zeroshot_specificity", "dtype": "float64"}, {"name": "zeroshot_intent", "dtype": "float64"}, {"name": "zeroshot_factuality", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand", "dtype": "int64"}, {"name": "zeroshot_relevance", "dtype": "int64"}, {"name": "zeroshot_readability", "dtype": "int64"}, {"name": "zeroshot_enough-detail", "dtype": "int64"}, {"name": "zeroshot_biased:", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences", "dtype": "float64"}, {"name": "zeroshot_repetetive", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context", "dtype": "float64"}, {"name": "zeroshot_too-long", "dtype": "int64"}, {"name": "external_rm1", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 29701040, "num_examples": 25600}], "download_size": 2057759, "dataset_size": 29701040}}
|
2023-09-11T10:12:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_zeroshot_sileod"
More Information needed
|
[
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_zeroshot_sileod\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_zeroshot_sileod\"\n\nMore Information needed"
] |
[
6,
35
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_zeroshot_sileod\"\n\nMore Information needed"
] |
926f3ed4fb511f5dbdb2ab75879de93ea88a1d55
|
# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_2100
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/_fsx_shared-falcon-180B_2100
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [_fsx_shared-falcon-180B_2100](https://huggingface.co/_fsx_shared-falcon-180B_2100) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details__fsx_shared-falcon-180B_2100",
"harness_truthfulqa_mc_0",
split="train")
```
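Similarly, assuming the aggregated "results" configuration mentioned above follows the same split convention (the split name here is an assumption), the aggregated metrics can be loaded like this:
```python
from datasets import load_dataset

# "results" config per the card; "train" is assumed to point to the
# latest aggregated results, as it does for the per-task configurations.
results = load_dataset(
    "open-llm-leaderboard/details__fsx_shared-falcon-180B_2100",
    "results",
    split="train",
)
print(results[0])
```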
## Latest results
These are the [latest results from run 2023-09-11T14:38:41.751680](https://huggingface.co/datasets/open-llm-leaderboard/details__fsx_shared-falcon-180B_2100/blob/main/results_2023-09-11T14-38-41.751680.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7013964110418803,
"acc_stderr": 0.030702382053756392,
"acc_norm": 0.7050725401087311,
"acc_norm_stderr": 0.030672281323978368,
"mc1": 0.3157894736842105,
"mc1_stderr": 0.016272287957916916,
"mc2": 0.4692416686068408,
"mc2_stderr": 0.014108890624515822
},
"harness|arc:challenge|25": {
"acc": 0.6544368600682594,
"acc_stderr": 0.013896938461145677,
"acc_norm": 0.6860068259385665,
"acc_norm_stderr": 0.013562691224726288
},
"harness|hellaswag|10": {
"acc": 0.7061342362079267,
"acc_stderr": 0.004546002255456772,
"acc_norm": 0.8914558852818164,
"acc_norm_stderr": 0.003104306434972476
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7763157894736842,
"acc_stderr": 0.033911609343436046,
"acc_norm": 0.7763157894736842,
"acc_norm_stderr": 0.033911609343436046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768081,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768081
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7471698113207547,
"acc_stderr": 0.026749899771241214,
"acc_norm": 0.7471698113207547,
"acc_norm_stderr": 0.026749899771241214
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.031164899666948607,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.031164899666948607
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.03391750322321659,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.03391750322321659
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082635,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082635
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6595744680851063,
"acc_stderr": 0.030976692998534446,
"acc_norm": 0.6595744680851063,
"acc_norm_stderr": 0.030976692998534446
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6344827586206897,
"acc_stderr": 0.04013124195424386,
"acc_norm": 0.6344827586206897,
"acc_norm_stderr": 0.04013124195424386
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4947089947089947,
"acc_stderr": 0.02574986828855657,
"acc_norm": 0.4947089947089947,
"acc_norm_stderr": 0.02574986828855657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8451612903225807,
"acc_stderr": 0.020579287326583227,
"acc_norm": 0.8451612903225807,
"acc_norm_stderr": 0.020579287326583227
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5615763546798029,
"acc_stderr": 0.03491207857486519,
"acc_norm": 0.5615763546798029,
"acc_norm_stderr": 0.03491207857486519
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8242424242424242,
"acc_stderr": 0.02972094300622445,
"acc_norm": 0.8242424242424242,
"acc_norm_stderr": 0.02972094300622445
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8787878787878788,
"acc_stderr": 0.023253157951942084,
"acc_norm": 0.8787878787878788,
"acc_norm_stderr": 0.023253157951942084
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9430051813471503,
"acc_stderr": 0.016731085293607548,
"acc_norm": 0.9430051813471503,
"acc_norm_stderr": 0.016731085293607548
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.022421273612923714,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.022421273612923714
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.029116617606083025,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.029116617606083025
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7815126050420168,
"acc_stderr": 0.026841514322958938,
"acc_norm": 0.7815126050420168,
"acc_norm_stderr": 0.026841514322958938
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4304635761589404,
"acc_stderr": 0.04042809961395634,
"acc_norm": 0.4304635761589404,
"acc_norm_stderr": 0.04042809961395634
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8935779816513761,
"acc_stderr": 0.013221554674594372,
"acc_norm": 0.8935779816513761,
"acc_norm_stderr": 0.013221554674594372
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6064814814814815,
"acc_stderr": 0.03331747876370312,
"acc_norm": 0.6064814814814815,
"acc_norm_stderr": 0.03331747876370312
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9166666666666666,
"acc_stderr": 0.019398452135813902,
"acc_norm": 0.9166666666666666,
"acc_norm_stderr": 0.019398452135813902
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8987341772151899,
"acc_stderr": 0.019637720526065494,
"acc_norm": 0.8987341772151899,
"acc_norm_stderr": 0.019637720526065494
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7847533632286996,
"acc_stderr": 0.027584066602208274,
"acc_norm": 0.7847533632286996,
"acc_norm_stderr": 0.027584066602208274
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8320610687022901,
"acc_stderr": 0.032785485373431386,
"acc_norm": 0.8320610687022901,
"acc_norm_stderr": 0.032785485373431386
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.03680918141673881,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.03680918141673881
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.803680981595092,
"acc_stderr": 0.031207970394709218,
"acc_norm": 0.803680981595092,
"acc_norm_stderr": 0.031207970394709218
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5982142857142857,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.5982142857142857,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573974,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573974
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9017094017094017,
"acc_stderr": 0.019503444900757567,
"acc_norm": 0.9017094017094017,
"acc_norm_stderr": 0.019503444900757567
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.879948914431673,
"acc_stderr": 0.011622736692041256,
"acc_norm": 0.879948914431673,
"acc_norm_stderr": 0.011622736692041256
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7890173410404624,
"acc_stderr": 0.021966309947043114,
"acc_norm": 0.7890173410404624,
"acc_norm_stderr": 0.021966309947043114
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5072625698324023,
"acc_stderr": 0.0167207374051795,
"acc_norm": 0.5072625698324023,
"acc_norm_stderr": 0.0167207374051795
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7810457516339869,
"acc_stderr": 0.02367908986180772,
"acc_norm": 0.7810457516339869,
"acc_norm_stderr": 0.02367908986180772
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.797427652733119,
"acc_stderr": 0.022827317491059682,
"acc_norm": 0.797427652733119,
"acc_norm_stderr": 0.022827317491059682
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8179012345679012,
"acc_stderr": 0.021473491834808355,
"acc_norm": 0.8179012345679012,
"acc_norm_stderr": 0.021473491834808355
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5567375886524822,
"acc_stderr": 0.029634838473766006,
"acc_norm": 0.5567375886524822,
"acc_norm_stderr": 0.029634838473766006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5501955671447197,
"acc_stderr": 0.01270572149856497,
"acc_norm": 0.5501955671447197,
"acc_norm_stderr": 0.01270572149856497
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7463235294117647,
"acc_stderr": 0.026431329870789534,
"acc_norm": 0.7463235294117647,
"acc_norm_stderr": 0.026431329870789534
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7532679738562091,
"acc_stderr": 0.0174408203674025,
"acc_norm": 0.7532679738562091,
"acc_norm_stderr": 0.0174408203674025
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.040693063197213775,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.040693063197213775
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7673469387755102,
"acc_stderr": 0.027049257915896182,
"acc_norm": 0.7673469387755102,
"acc_norm_stderr": 0.027049257915896182
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.900497512437811,
"acc_stderr": 0.021166216304659386,
"acc_norm": 0.900497512437811,
"acc_norm_stderr": 0.021166216304659386
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.92,
"acc_stderr": 0.0272659924344291,
"acc_norm": 0.92,
"acc_norm_stderr": 0.0272659924344291
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8538011695906432,
"acc_stderr": 0.027097290118070813,
"acc_norm": 0.8538011695906432,
"acc_norm_stderr": 0.027097290118070813
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3157894736842105,
"mc1_stderr": 0.016272287957916916,
"mc2": 0.4692416686068408,
"mc2_stderr": 0.014108890624515822
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details__fsx_shared-falcon-180B_2100
|
[
"region:us"
] |
2023-09-11T10:31:21+00:00
|
{"pretty_name": "Evaluation run of _fsx_shared-falcon-180B_2100", "dataset_summary": "Dataset automatically created during the evaluation run of model [_fsx_shared-falcon-180B_2100](https://huggingface.co/_fsx_shared-falcon-180B_2100) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details__fsx_shared-falcon-180B_2100\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-11T14:38:41.751680](https://huggingface.co/datasets/open-llm-leaderboard/details__fsx_shared-falcon-180B_2100/blob/main/results_2023-09-11T14-38-41.751680.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7013964110418803,\n \"acc_stderr\": 0.030702382053756392,\n \"acc_norm\": 0.7050725401087311,\n \"acc_norm_stderr\": 0.030672281323978368,\n \"mc1\": 0.3157894736842105,\n \"mc1_stderr\": 0.016272287957916916,\n \"mc2\": 0.4692416686068408,\n \"mc2_stderr\": 0.014108890624515822\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6544368600682594,\n \"acc_stderr\": 0.013896938461145677,\n \"acc_norm\": 0.6860068259385665,\n \"acc_norm_stderr\": 0.013562691224726288\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7061342362079267,\n \"acc_stderr\": 0.004546002255456772,\n \"acc_norm\": 0.8914558852818164,\n \"acc_norm_stderr\": 0.003104306434972476\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7763157894736842,\n \"acc_stderr\": 0.033911609343436046,\n \"acc_norm\": 0.7763157894736842,\n \"acc_norm_stderr\": 0.033911609343436046\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768081,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768081\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7471698113207547,\n \"acc_stderr\": 0.026749899771241214,\n \"acc_norm\": 0.7471698113207547,\n \"acc_norm_stderr\": 0.026749899771241214\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.031164899666948607,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.031164899666948607\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n 
\"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.03391750322321659,\n \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.03391750322321659\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6595744680851063,\n \"acc_stderr\": 0.030976692998534446,\n \"acc_norm\": 0.6595744680851063,\n \"acc_norm_stderr\": 0.030976692998534446\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.04013124195424386,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.04013124195424386\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8451612903225807,\n \"acc_stderr\": 0.020579287326583227,\n \"acc_norm\": 0.8451612903225807,\n \"acc_norm_stderr\": 0.020579287326583227\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5615763546798029,\n \"acc_stderr\": 0.03491207857486519,\n \"acc_norm\": 0.5615763546798029,\n \"acc_norm_stderr\": 0.03491207857486519\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8787878787878788,\n \"acc_stderr\": 0.023253157951942084,\n \"acc_norm\": 0.8787878787878788,\n \"acc_norm_stderr\": 0.023253157951942084\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9430051813471503,\n \"acc_stderr\": 0.016731085293607548,\n \"acc_norm\": 0.9430051813471503,\n \"acc_norm_stderr\": 0.016731085293607548\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.7333333333333333,\n \"acc_stderr\": 0.022421273612923714,\n \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.022421273612923714\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083025,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083025\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7815126050420168,\n \"acc_stderr\": 0.026841514322958938,\n \"acc_norm\": 0.7815126050420168,\n \"acc_norm_stderr\": 0.026841514322958938\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4304635761589404,\n \"acc_stderr\": 0.04042809961395634,\n \"acc_norm\": 0.4304635761589404,\n \"acc_norm_stderr\": 0.04042809961395634\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8935779816513761,\n \"acc_stderr\": 0.013221554674594372,\n \"acc_norm\": 0.8935779816513761,\n \"acc_norm_stderr\": 0.013221554674594372\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6064814814814815,\n \"acc_stderr\": 0.03331747876370312,\n \"acc_norm\": 0.6064814814814815,\n \"acc_norm_stderr\": 0.03331747876370312\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9166666666666666,\n \"acc_stderr\": 0.019398452135813902,\n \"acc_norm\": 0.9166666666666666,\n \"acc_norm_stderr\": 0.019398452135813902\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8987341772151899,\n \"acc_stderr\": 0.019637720526065494,\n \"acc_norm\": 0.8987341772151899,\n \"acc_norm_stderr\": 0.019637720526065494\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7847533632286996,\n \"acc_stderr\": 0.027584066602208274,\n \"acc_norm\": 0.7847533632286996,\n \"acc_norm_stderr\": 0.027584066602208274\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8320610687022901,\n \"acc_stderr\": 0.032785485373431386,\n \"acc_norm\": 0.8320610687022901,\n \"acc_norm_stderr\": 0.032785485373431386\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.03680918141673881,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.03680918141673881\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709218,\n \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709218\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5982142857142857,\n \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.5982142857142857,\n \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.03586594738573974,\n \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.03586594738573974\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9017094017094017,\n \"acc_stderr\": 0.019503444900757567,\n \"acc_norm\": 0.9017094017094017,\n \"acc_norm_stderr\": 0.019503444900757567\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.879948914431673,\n \"acc_stderr\": 0.011622736692041256,\n 
\"acc_norm\": 0.879948914431673,\n \"acc_norm_stderr\": 0.011622736692041256\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7890173410404624,\n \"acc_stderr\": 0.021966309947043114,\n \"acc_norm\": 0.7890173410404624,\n \"acc_norm_stderr\": 0.021966309947043114\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5072625698324023,\n \"acc_stderr\": 0.0167207374051795,\n \"acc_norm\": 0.5072625698324023,\n \"acc_norm_stderr\": 0.0167207374051795\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7810457516339869,\n \"acc_stderr\": 0.02367908986180772,\n \"acc_norm\": 0.7810457516339869,\n \"acc_norm_stderr\": 0.02367908986180772\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.797427652733119,\n \"acc_stderr\": 0.022827317491059682,\n \"acc_norm\": 0.797427652733119,\n \"acc_norm_stderr\": 0.022827317491059682\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8179012345679012,\n \"acc_stderr\": 0.021473491834808355,\n \"acc_norm\": 0.8179012345679012,\n \"acc_norm_stderr\": 0.021473491834808355\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5567375886524822,\n \"acc_stderr\": 0.029634838473766006,\n \"acc_norm\": 0.5567375886524822,\n \"acc_norm_stderr\": 0.029634838473766006\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5501955671447197,\n \"acc_stderr\": 0.01270572149856497,\n \"acc_norm\": 0.5501955671447197,\n \"acc_norm_stderr\": 0.01270572149856497\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7463235294117647,\n \"acc_stderr\": 0.026431329870789534,\n \"acc_norm\": 0.7463235294117647,\n \"acc_norm_stderr\": 0.026431329870789534\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7532679738562091,\n \"acc_stderr\": 0.0174408203674025,\n \"acc_norm\": 0.7532679738562091,\n \"acc_norm_stderr\": 0.0174408203674025\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.040693063197213775,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.040693063197213775\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7673469387755102,\n \"acc_stderr\": 0.027049257915896182,\n \"acc_norm\": 0.7673469387755102,\n \"acc_norm_stderr\": 0.027049257915896182\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.900497512437811,\n \"acc_stderr\": 0.021166216304659386,\n \"acc_norm\": 0.900497512437811,\n \"acc_norm_stderr\": 0.021166216304659386\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.92,\n \"acc_stderr\": 0.0272659924344291,\n \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.0272659924344291\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.027097290118070813,\n \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.027097290118070813\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3157894736842105,\n \"mc1_stderr\": 0.016272287957916916,\n \"mc2\": 0.4692416686068408,\n \"mc2_stderr\": 0.014108890624515822\n }\n}\n```", "repo_url": "https://huggingface.co/_fsx_shared-falcon-180B_2100", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", 
"data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|arc:challenge|25_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hellaswag|10_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-38-41.751680.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-38-41.751680.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-38-41.751680.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T14-38-41.751680.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T11_31_21.074717", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T11-31-21.074717.parquet"]}, {"split": "2023_09_11T14_38_41.751680", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T14-38-41.751680.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T11_31_21.074717", "path": ["results_2023-09-11T11-31-21.074717.parquet"]}, {"split": "2023_09_11T14_38_41.751680", "path": ["results_2023-09-11T14-38-41.751680.parquet"]}, {"split": "latest", "path": ["results_2023-09-11T14-38-41.751680.parquet"]}]}]}
|
2023-09-11T13:39:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_2100
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_2100 on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
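The original loading snippet was stripped from this copy of the card, so the following is a minimal sketch, assuming the standard Open LLM Leaderboard `details_*` repository layout. The repository id is a placeholder (the exact id for this model is not shown here); the configuration and split names are taken from this card's own metadata.
```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual details repository
# for _fsx_shared-falcon-180B_2100 on the Hugging Face Hub.
data = load_dataset(
    "open-llm-leaderboard/details_<org>__<model>",
    "harness_hendrycksTest_abstract_algebra_5",  # one of the 61 configurations
    split="latest",
)
```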
## Latest results
These are the latest results from run 2023-09-11T14:38:41.751680 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_2100",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_2100 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-11T14:38:41.751680(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_2100",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_2100 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-11T14:38:41.751680(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_2100## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_2100 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-11T14:38:41.751680(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
519ca75371ffe3283b17063c9b1aa9d0c035f2ce
|
# Dataset Card for Evaluation run of uukuguy/speechless-codellama-dolphin-orca-platypus-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-codellama-dolphin-orca-platypus-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-codellama-dolphin-orca-platypus-13b](https://huggingface.co/uukuguy/speechless-codellama-dolphin-orca-platypus-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b",
"harness_winogrande_5",
split="train")
```
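Before picking one of the 64 configurations, they can be enumerated with the `datasets` library's config-listing helper; a minimal sketch:
```python
from datasets import get_dataset_config_names

# One configuration per evaluated task, plus the aggregated "results".
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b"
)
print(len(configs), configs[:5])
```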
## Latest results
These are the [latest results from run 2023-10-23T11:16:46.322538](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b/blob/main/results_2023-10-23T11-16-46.322538.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2627936241610738,
"em_stderr": 0.004507560917898856,
"f1": 0.30675125838926204,
"f1_stderr": 0.004485653251386068,
"acc": 0.3824122338022455,
"acc_stderr": 0.010659520829079217
},
"harness|drop|3": {
"em": 0.2627936241610738,
"em_stderr": 0.004507560917898856,
"f1": 0.30675125838926204,
"f1_stderr": 0.004485653251386068
},
"harness|gsm8k|5": {
"acc": 0.09552691432903715,
"acc_stderr": 0.008096605771155735
},
"harness|winogrande|5": {
"acc": 0.6692975532754538,
"acc_stderr": 0.0132224358870027
}
}
```
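The aggregated numbers above can also be retrieved programmatically; a minimal sketch using the "results" configuration and the "latest" split, both named in this card's metadata:
```python
from datasets import load_dataset

# "results" stores the aggregated metrics; the "latest" split points
# at the most recent run (here 2023-10-23T11:16:46.322538).
results = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b",
    "results",
    split="latest",
)
```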
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b
|
[
"region:us"
] |
2023-09-11T10:46:21+00:00
|
{"pretty_name": "Evaluation run of uukuguy/speechless-codellama-dolphin-orca-platypus-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-codellama-dolphin-orca-platypus-13b](https://huggingface.co/uukuguy/speechless-codellama-dolphin-orca-platypus-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T11:16:46.322538](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b/blob/main/results_2023-10-23T11-16-46.322538.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2627936241610738,\n \"em_stderr\": 0.004507560917898856,\n \"f1\": 0.30675125838926204,\n \"f1_stderr\": 0.004485653251386068,\n \"acc\": 0.3824122338022455,\n \"acc_stderr\": 0.010659520829079217\n },\n \"harness|drop|3\": {\n \"em\": 0.2627936241610738,\n \"em_stderr\": 0.004507560917898856,\n \"f1\": 0.30675125838926204,\n \"f1_stderr\": 0.004485653251386068\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09552691432903715,\n \"acc_stderr\": 0.008096605771155735\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6692975532754538,\n \"acc_stderr\": 0.0132224358870027\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/speechless-codellama-dolphin-orca-platypus-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|arc:challenge|25_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_23T11_16_46.322538", "path": ["**/details_harness|drop|3_2023-10-23T11-16-46.322538.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T11-16-46.322538.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_23T11_16_46.322538", "path": ["**/details_harness|gsm8k|5_2023-10-23T11-16-46.322538.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T11-16-46.322538.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hellaswag|10_2023-09-11T11-46-04.714895.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T11-46-04.714895.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T11-46-04.714895.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T11-46-04.714895.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T11-46-04.714895.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T11-46-04.714895.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T11-46-04.714895.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_23T11_16_46.322538", "path": ["**/details_harness|winogrande|5_2023-10-23T11-16-46.322538.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-23T11-16-46.322538.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T11_46_04.714895", "path": ["results_2023-09-11T11-46-04.714895.parquet"]}, {"split": "2023_10_23T11_16_46.322538", "path": ["results_2023-10-23T11-16-46.322538.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T11-16-46.322538.parquet"]}]}]}
|
2023-10-23T10:16:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-codellama-dolphin-orca-platypus-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-codellama-dolphin-orca-platypus-13b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
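A minimal sketch, assuming the standard Open LLM Leaderboard repository naming (the repository id below is an assumption and is not stated in this entry; the config name `harness_winogrande_5` and the `latest` split alias come from this entry's config list):

```python
from datasets import load_dataset

# Hedged sketch: the repo id follows the usual Open LLM Leaderboard naming
# convention and is NOT stated in this entry; the config name and the
# "latest" split alias are taken from the config list above.
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-codellama-dolphin-orca-platypus-13b",
    "harness_winogrande_5",
    split="latest",
)
print(data)
```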
## Latest results
These are the latest results from run 2023-10-23T11:16:46.322538 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of uukuguy/speechless-codellama-dolphin-orca-platypus-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-codellama-dolphin-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T11:16:46.322538(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-codellama-dolphin-orca-platypus-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-codellama-dolphin-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T11:16:46.322538(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
34,
31,
182,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-codellama-dolphin-orca-platypus-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-codellama-dolphin-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T11:16:46.322538(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
fae8fe9f99a9ac8626bf9ac6bf709cd7958c0bdc
|
Spotify genres scraped from https://everynoise.com/everynoise1d.cgi?scope=all
---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: genre_name
dtype: string
- name: genre_slug
dtype: string
- name: playlist_url
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 1047789
num_examples: 6276
download_size: 577290
dataset_size: 1047789
---
# Dataset Card for "spotify-genres"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
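The card body above is empty, so here is a minimal loading sketch based on the features declared in the YAML header (`genre_name`, `genre_slug`, `playlist_url`, `description`); the repository id `devxpy/spotify-genres` is taken from this entry:

```python
from datasets import load_dataset

# Load the single "train" split declared in the YAML header above.
genres = load_dataset("devxpy/spotify-genres", split="train")

# Each row carries the four string features listed in the header.
for row in genres.select(range(3)):
    print(row["genre_name"], "->", row["playlist_url"])
```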
|
devxpy/spotify-genres
|
[
"region:us"
] |
2023-09-11T10:50:49+00:00
|
{}
|
2023-09-11T10:54:43+00:00
|
[] |
[] |
TAGS
#region-us
|
Spotify genres scraped from URL
---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: genre_name
dtype: string
- name: genre_slug
dtype: string
- name: playlist_url
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 1047789
num_examples: 6276
download_size: 577290
dataset_size: 1047789
---
# Dataset Card for "spotify-genres"
More Information needed
|
[
"# Dataset Card for \"spotify-genres\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spotify-genres\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spotify-genres\"\n\nMore Information needed"
] |
2a44562a9c1e4d9a161536261d7e1b0bbf2627d1
|
# Dataset Card for "dreambooth-duc-ba-cathedral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
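As the card is otherwise empty, a minimal sketch based on this entry's metadata, which declares a single `image` feature with 5 training examples (repository id `bellagio-ai/dreambooth-duc-ba-cathedral` from this entry):

```python
from datasets import load_dataset

# The entry's metadata declares one "image" column and 5 examples in "train".
ds = load_dataset("bellagio-ai/dreambooth-duc-ba-cathedral", split="train")
print(len(ds))           # expected: 5
ds[0]["image"].show()    # decoded as a PIL image; opens a local viewer
```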
|
bellagio-ai/dreambooth-duc-ba-cathedral
|
[
"region:us"
] |
2023-09-11T11:01:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2382062.0, "num_examples": 5}], "download_size": 2383242, "dataset_size": 2382062.0}}
|
2023-09-11T11:02:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dreambooth-duc-ba-cathedral"
More Information needed
|
[
"# Dataset Card for \"dreambooth-duc-ba-cathedral\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-duc-ba-cathedral\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dreambooth-duc-ba-cathedral\"\n\nMore Information needed"
] |
d2b082e79dee36dee58a09c10a20f980b75019dd
|
# KFUPM Handwritten Arabic TexT (KHATT) database
### Version 1.0 (September 2012 Release)
The database contains handwritten Arabic text images and its ground-truth developed for
research in the area of Arabic handwritten text. It contains the line images and their ground-truth. It was used for the pilot experimentation as reported in the paper: <ins> S. A. Mahmoud, I. Ahmad, M. Alshayeb, W. G. Al-Khatib, M. T. Parvez, G. A. Fink, V. Margner, and H. EL Abed, “KHATT: Arabic Offline
Handwritten Text Database </ins>, In Proceedings of the 13th International Conference on Frontiers in Handwriting Recognition (ICFHR 2012), Bari, Italy, 2012, pp. 447-452, IEEE Computer Society.
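If the repository loads with the Hugging Face `datasets` library (an assumption; the card documents neither splits nor column names), the line images and their ground-truth can be inspected like this:

```python
from datasets import load_dataset

# Hedged sketch: split and column names are not documented on this card,
# so print them before relying on any particular key.
khatt = load_dataset("benhachem/KHATT")
print(khatt)                                 # available splits and sizes
first_split = next(iter(khatt.values()))
print(first_split.features)                  # e.g. line image + transcription
```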
|
benhachem/KHATT
|
[
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:ar",
"OCR",
"Optical Character Recognition ",
"Arabic OCR",
"arabic ",
"ocr",
"Textline images",
"region:us"
] |
2023-09-11T11:01:17+00:00
|
{"language": ["ar"], "size_categories": ["1K<n<10K"], "task_categories": ["image-to-text"], "tags": ["OCR", "Optical Character Recognition ", "Arabic OCR", "arabic ", "ocr", "Textline images"]}
|
2023-09-12T12:05:06+00:00
|
[] |
[
"ar"
] |
TAGS
#task_categories-image-to-text #size_categories-1K<n<10K #language-Arabic #OCR #Optical Character Recognition #Arabic OCR #arabic #ocr #Textline images #region-us
|
# KFUPM Handwritten Arabic TexT (KHATT) database
### Version 1.0 (September 2012 Release)
The database contains handwritten Arabic text images and its ground-truth developed for
research in the area of Arabic handwritten text. It contains the line images and their ground-truth. It was used for the pilot experimentation as reported in the paper: <ins> S. A. Mahmoud, I. Ahmad, M. Alshayeb, W. G. Al-Khatib, M. T. Parvez, G. A. Fink, V. Margner, and H. EL Abed, “KHATT: Arabic Offline
Handwritten Text Database </ins>, In Proceedings of the 13th International Conference on Frontiers in Handwriting Recognition (ICFHR 2012), Bari, Italy, 2012, pp. 447-452, IEEE Computer Society.
|
[
"# KFUPM Handwritten Arabic TexT (KHATT) database",
"### Version 1.0 (September 2012 Release)\n\nThe database contains handwritten Arabic text images and its ground-truth developed for \nresearch in the area of Arabic handwritten text. It contains the line images and their ground-truth. It was used for the pilot experimentation as reported in the paper: <ins> S. A. Mahmoud, I. Ahmad, M. Alshayeb, W. G. Al-Khatib, M. T. Parvez, G. A. Fink, V. Margner, and H. EL Abed, “KHATT: Arabic Offline \nHandwritten Text Database </ins>, In Proceedings of the 13th International Conference on Frontiers in Handwriting Recognition (ICFHR 2012), Bari, Italy, 2012, pp. 447-452, IEEE Computer Society."
] |
[
"TAGS\n#task_categories-image-to-text #size_categories-1K<n<10K #language-Arabic #OCR #Optical Character Recognition #Arabic OCR #arabic #ocr #Textline images #region-us \n",
"# KFUPM Handwritten Arabic TexT (KHATT) database",
"### Version 1.0 (September 2012 Release)\n\nThe database contains handwritten Arabic text images and its ground-truth developed for \nresearch in the area of Arabic handwritten text. It contains the line images and their ground-truth. It was used for the pilot experimentation as reported in the paper: <ins> S. A. Mahmoud, I. Ahmad, M. Alshayeb, W. G. Al-Khatib, M. T. Parvez, G. A. Fink, V. Margner, and H. EL Abed, “KHATT: Arabic Offline \nHandwritten Text Database </ins>, In Proceedings of the 13th International Conference on Frontiers in Handwriting Recognition (ICFHR 2012), Bari, Italy, 2012, pp. 447-452, IEEE Computer Society."
] |
[
62,
15,
181
] |
[
"passage: TAGS\n#task_categories-image-to-text #size_categories-1K<n<10K #language-Arabic #OCR #Optical Character Recognition #Arabic OCR #arabic #ocr #Textline images #region-us \n# KFUPM Handwritten Arabic TexT (KHATT) database### Version 1.0 (September 2012 Release)\n\nThe database contains handwritten Arabic text images and its ground-truth developed for \nresearch in the area of Arabic handwritten text. It contains the line images and their ground-truth. It was used for the pilot experimentation as reported in the paper: <ins> S. A. Mahmoud, I. Ahmad, M. Alshayeb, W. G. Al-Khatib, M. T. Parvez, G. A. Fink, V. Margner, and H. EL Abed, “KHATT: Arabic Offline \nHandwritten Text Database </ins>, In Proceedings of the 13th International Conference on Frontiers in Handwriting Recognition (ICFHR 2012), Bari, Italy, 2012, pp. 447-452, IEEE Computer Society."
] |
0becc08dc1bdcdf34fbe4540629dcd2e5a874f80
|
# Structured Anonymous Career Paths extracted from Resumes
## Dataset Description
- **Homepage:** coming soon
- **Repository:** coming soon
- **Paper:** coming soon
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset contains 2164 anonymous career paths across 24 different industries.
Each work experience is tagged with its corresponding ESCO occupation (ESCO v1.1.1).
### Languages
We use the English version of ESCO.
All resume data is in English as well.
## Dataset Structure
Each working history contains up to 17 experiences.
They appear in order, and each experience has a title, description, start, ESCO URI, and ESCO title field.
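A minimal loading sketch (the repository id comes from this entry; the exact split and column names for the title, description, start, and ESCO fields are not spelled out on this card, so inspect the features before relying on them):

```python
from datasets import load_dataset

# Hedged sketch: verify the actual split and column names via .features,
# since the card only describes the fields informally.
histories = load_dataset("jensjorisdecorte/anonymous-working-histories")
print(histories)
print(next(iter(histories.values())).features)
```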
### Citation Information
[More Information Needed]
|
jensjorisdecorte/anonymous-working-histories
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"Career Path Prediction",
"region:us"
] |
2023-09-11T11:08:07+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Synthetic ESCO skill sentences", "tags": ["Career Path Prediction"]}
|
2023-09-11T14:09:41+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #Career Path Prediction #region-us
|
# Structured Anonymous Career Paths extracted from Resumes
## Dataset Description
- Homepage: coming soon
- Repository: coming soon
- Paper: coming soon
- Point of Contact: jensjoris@URL
### Dataset Summary
This dataset contains 2164 anonymous career paths across 24 different industries.
Each work experience is tagged with its corresponding ESCO occupation (ESCO v1.1.1).
### Languages
We use the English version of ESCO.
All resume data is in English as well.
## Dataset Structure
Each working history contains up to 17 experiences.
They appear in order, and each experience has a title, description, start, ESCO URI, and ESCO title field.
|
[
"# Structured Anonymous Career Paths extracted from Resumes",
"## Dataset Description\n\n- Homepage: coming soon\n- Repository: coming soon\n- Paper: coming soon\n- Point of Contact: jensjoris@URL",
"### Dataset Summary\n\nThis dataset contains 2164 anonymous career paths across 24 differend industries.\nEach work experience is tagger with their corresponding ESCO occupation (ESCO v1.1.1).",
"### Languages\n\nWe use the English version of ESCO.\nAll resume data is in English as well.",
"## Dataset Structure\n\nEach working history contains up to 17 experiences.\nThey appear in order, and each experience has a title, description, start, and, ESCO uri and ESCO title field."
] |
[
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #Career Path Prediction #region-us \n",
"# Structured Anonymous Career Paths extracted from Resumes",
"## Dataset Description\n\n- Homepage: coming soon\n- Repository: coming soon\n- Paper: coming soon\n- Point of Contact: jensjoris@URL",
"### Dataset Summary\n\nThis dataset contains 2164 anonymous career paths across 24 differend industries.\nEach work experience is tagger with their corresponding ESCO occupation (ESCO v1.1.1).",
"### Languages\n\nWe use the English version of ESCO.\nAll resume data is in English as well.",
"## Dataset Structure\n\nEach working history contains up to 17 experiences.\nThey appear in order, and each experience has a title, description, start, and, ESCO uri and ESCO title field."
] |
[
49,
15,
32,
45,
22,
43
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #Career Path Prediction #region-us \n# Structured Anonymous Career Paths extracted from Resumes## Dataset Description\n\n- Homepage: coming soon\n- Repository: coming soon\n- Paper: coming soon\n- Point of Contact: jensjoris@URL### Dataset Summary\n\nThis dataset contains 2164 anonymous career paths across 24 differend industries.\nEach work experience is tagger with their corresponding ESCO occupation (ESCO v1.1.1).### Languages\n\nWe use the English version of ESCO.\nAll resume data is in English as well.## Dataset Structure\n\nEach working history contains up to 17 experiences.\nThey appear in order, and each experience has a title, description, start, and, ESCO uri and ESCO title field."
] |
c2b086e3606a7a0276588d593d36973f925517bd
|
# Dataset of kanzaki_ranko/神崎蘭子 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of kanzaki_ranko/神崎蘭子 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are `grey_hair, red_eyes, long_hair, twintails, drill_hair, twin_drills, ribbon, breasts, hair_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 653.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kanzaki_ranko_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 420.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kanzaki_ranko_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1265 | 897.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kanzaki_ranko_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 595.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kanzaki_ranko_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1265 | 1.14 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kanzaki_ranko_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kanzaki_ranko_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, bare_shoulders, hair_flower, looking_at_viewer, navel, solo, white_dress, white_thighhighs, bangs, cleavage, blush, bow, medium_breasts, sitting, smile, hair_between_eyes, hairband, jewelry, lace-trimmed_legwear, pink_flower, sleeveless_dress, white_rose |
| 1 | 6 |  |  |  |  |  | 1girl, black_dress, frills, gothic_lolita, hair_bow, long_sleeves, looking_at_viewer, solo, black_bow, choker, :d, bangs, open_mouth, upper_body, black_ribbon, collarbone, simple_background, white_background |
| 2 | 9 |  |  |  |  |  | 1girl, gothic_lolita, solo, black_pantyhose, smile, looking_at_viewer, parasol, black_dress, frills |
| 3 | 8 |  |  |  |  |  | 1girl, gothic_lolita, smile, solo, dress, parasol, choker, open_mouth |
| 4 | 5 |  |  |  |  |  | 1girl, book, pantyhose, solo, gothic_lolita, looking_at_viewer, open_mouth, smile, blush, dress |
| 5 | 9 |  |  |  |  |  | 1girl, solo, hair_flower, smile, wings, looking_at_viewer, bare_shoulders, blush, detached_sleeves |
| 6 | 9 |  |  |  |  |  | 1girl, dress, flower, open_mouth, solo, smile, thighhighs, hair_ornament, hat, petals, bare_shoulders, detached_sleeves |
| 7 | 5 |  |  |  |  |  | 1girl, blush, open_mouth, smile, solo, dress, looking_at_viewer, mini_crown |
| 8 | 12 |  |  |  |  |  | 1girl, solo, horns, gloves, mini_crown, thighhighs, wings, medium_breasts, bandages, open_mouth, cleavage, :d |
| 9 | 5 |  |  |  |  |  | 1girl, fishnet_gloves, gothic_lolita, hair_down, long_sleeves, solo, black_dress, looking_at_viewer, butterfly_on_hand, :d, earrings, mini_hat, open_mouth, white_background |
| 10 | 5 |  |  |  |  |  | 1girl, smile, solo, striped_thighhighs, white_gloves, capelet, dress, looking_at_viewer, simple_background, bow, frills, open_mouth, white_background |
| 11 | 16 |  |  |  |  |  | 1girl, solo, elbow_gloves, medium_breasts, blush, cleavage, looking_at_viewer, black_bikini, smile, detached_collar, navel, black_thighhighs, frills, lolita_hairband |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | hair_flower | looking_at_viewer | navel | solo | white_dress | white_thighhighs | bangs | cleavage | blush | bow | medium_breasts | sitting | smile | hair_between_eyes | hairband | jewelry | lace-trimmed_legwear | pink_flower | sleeveless_dress | white_rose | black_dress | frills | gothic_lolita | hair_bow | long_sleeves | black_bow | choker | :d | open_mouth | upper_body | black_ribbon | collarbone | simple_background | white_background | black_pantyhose | parasol | dress | book | pantyhose | wings | detached_sleeves | flower | thighhighs | hair_ornament | hat | petals | mini_crown | horns | gloves | bandages | fishnet_gloves | hair_down | butterfly_on_hand | earrings | mini_hat | striped_thighhighs | white_gloves | capelet | elbow_gloves | black_bikini | detached_collar | black_thighhighs | lolita_hairband |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:--------------|:--------------------|:--------|:-------|:--------------|:-------------------|:--------|:-----------|:--------|:------|:-----------------|:----------|:--------|:--------------------|:-----------|:----------|:-----------------------|:--------------|:-------------------|:-------------|:--------------|:---------|:----------------|:-----------|:---------------|:------------|:---------|:-----|:-------------|:-------------|:---------------|:-------------|:--------------------|:-------------------|:------------------|:----------|:--------|:-------|:------------|:--------|:-------------------|:---------|:-------------|:----------------|:------|:---------|:-------------|:--------|:---------|:-----------|:-----------------|:------------|:--------------------|:-----------|:-----------|:---------------------|:---------------|:----------|:---------------|:---------------|:------------------|:-------------------|:------------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | X | | X | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | | | X | | X | | | | | | | | | X | | | | | | | | X | X | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | | | | X | | | | | | | | | X | | | | | | | | | | X | | | | X | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | | X | | X | | | | | X | | | | X | | | | | | | | | | X | | | | | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | X | X | X | | X | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | X | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | X | | X | | | | | X | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | |
| 8 | 12 |  |  |  |  |  | X | | | | | X | | | | X | | | X | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | X | | | X | | | | X | X | X | X | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | | | X | | X | | | | | | | | | | | | | | | | | X | | X | | X | | | X | X | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | | | X | | X | | | | | | X | | | X | | | | | | | | | X | | | | | | | X | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | X | X | X | | | | | |
| 11 | 16 |  |  |  |  |  | X | | | X | X | X | | | | X | X | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X |
|
CyberHarem/kanzaki_ranko_idolmastercinderellagirls
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T11:20:28+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T10:13:22+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kanzaki\_ranko/神崎蘭子 (THE iDOLM@STER: Cinderella Girls)
=================================================================
This is the dataset of kanzaki\_ranko/神崎蘭子 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are 'grey\_hair, red\_eyes, long\_hair, twintails, drill\_hair, twin\_drills, ribbon, breasts, hair\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
12892671ce250b340270a81d4268efaed279a0e6
|
# Dataset of ryougi_shiki/両儀式/两仪式 (Fate/Grand Order)
This is the dataset of ryougi_shiki/両儀式/两仪式 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `short_hair, brown_hair, black_hair, blue_eyes, bangs, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 745.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ryougi_shiki_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 402.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ryougi_shiki_fgo/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1205 | 839.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ryougi_shiki_fgo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 647.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ryougi_shiki_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1205 | 1.19 GiB | [Download](https://huggingface.co/datasets/CyberHarem/ryougi_shiki_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ryougi_shiki_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 23 |  |  |  |  |  | 1girl, katana, kimono, solo, obi |
| 1 | 19 |  |  |  |  |  | 1girl, holding_sword, katana, solo, looking_at_viewer, white_kimono, obiage, obijime, wide_sleeves |
| 2 | 9 |  |  |  |  |  | 1girl, holding_sword, katana, looking_at_viewer, obiage, obijime, solo, white_kimono, wide_sleeves, long_sleeves, simple_background, white_background, ahoge, barefoot, full_body, sheath |
| 3 | 8 |  |  |  |  |  | 1girl, kimono, obi, solo, smile |
| 4 | 10 |  |  |  |  |  | 1girl, knife, red_jacket, solo, blue_kimono, obi, reverse_grip, boots, multicolored_eyes |
| 5 | 42 |  |  |  |  |  | 1girl, long_sleeves, open_jacket, red_jacket, solo, blue_kimono, holding_knife, looking_at_viewer, closed_mouth, fur_trim, holding_weapon, obi |
| 6 | 15 |  |  |  |  |  | 1girl, floral_print, katana, kimono, solo, very_long_hair, hair_flower, holding_sword, crane_(animal), looking_at_viewer, obi, ahoge, sheath, wide_sleeves, animal_print |
| 7 | 13 |  |  |  |  |  | 1girl, kimono, solo, smile, floral_print, hair_flower, looking_at_viewer, ahoge, upper_body, very_long_hair, wide_sleeves, crane_(animal), white_background, animal_print, long_sleeves, parted_bangs, sash, simple_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | katana | kimono | solo | obi | holding_sword | looking_at_viewer | white_kimono | obiage | obijime | wide_sleeves | long_sleeves | simple_background | white_background | ahoge | barefoot | full_body | sheath | smile | knife | red_jacket | blue_kimono | reverse_grip | boots | multicolored_eyes | open_jacket | holding_knife | closed_mouth | fur_trim | holding_weapon | floral_print | very_long_hair | hair_flower | crane_(animal) | animal_print | upper_body | parted_bangs | sash |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:---------|:-------|:------|:----------------|:--------------------|:---------------|:---------|:----------|:---------------|:---------------|:--------------------|:-------------------|:--------|:-----------|:------------|:---------|:--------|:--------|:-------------|:--------------|:---------------|:--------|:--------------------|:--------------|:----------------|:---------------|:-----------|:-----------------|:---------------|:-----------------|:--------------|:-----------------|:---------------|:-------------|:---------------|:-------|
| 0 | 23 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 19 |  |  |  |  |  | X | X | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | X | X | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | | | X | X | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | |
| 5 | 42 |  |  |  |  |  | X | | | X | X | | X | | | | | X | | | | | | | | | X | X | | | | X | X | X | X | X | | | | | | | | |
| 6 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | | | | X | | | | X | | | X | | | | | | | | | | | | | X | X | X | X | X | | | |
| 7 | 13 |  |  |  |  |  | X | | X | X | | | X | | | | X | X | X | X | X | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/ryougi_shiki_fgo
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T11:26:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-12T14:36:01+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of ryougi\_shiki/両儀式/两仪式 (Fate/Grand Order)
===================================================
This is the dataset of ryougi\_shiki/両儀式/两仪式 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are 'short\_hair, brown\_hair, black\_hair, blue\_eyes, bangs, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
b711ffc04c2fd7a45b35f384a5a7d168fc5e57ef
|
# Dataset Card for "artery-ultrasound-siit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
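The metadata for this entry declares paired `pixel_values` and `label` image columns with 100 training examples, which suggests ultrasound frames paired with segmentation masks (that reading is an assumption); a minimal sketch:

```python
from datasets import load_dataset

# Both columns decode to PIL images per this entry's metadata.
ds = load_dataset("Pavarissy/artery-ultrasound-siit", split="train")
example = ds[0]
print(example["pixel_values"].size, example["label"].size)
```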
|
Pavarissy/artery-ultrasound-siit
|
[
"region:us"
] |
2023-09-11T11:28:25+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 230791779.0, "num_examples": 100}], "download_size": 17454777, "dataset_size": 230791779.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T12:20:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "artery-ultrasound-siit"
More Information needed
|
[
"# Dataset Card for \"artery-ultrasound-siit\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"artery-ultrasound-siit\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"artery-ultrasound-siit\"\n\nMore Information needed"
] |
0678606659a6ec7f3e6260178408d93236c430bf
|
# Dataset of homika (Pokémon)
This is the dataset of homika (Pokémon), containing 500 images and their tags.
The core tags of this character are `white_hair, hair_ornament, blue_eyes, freckles, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 509.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/homika_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 310.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/homika_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1103 | 603.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/homika_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 453.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/homika_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1103 | 813.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/homika_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/homika_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, bass_guitar, hair_bobbles, striped_dress, topknot, boots, pokemon_(creature), open_mouth, smile |
| 1 | 7 |  |  |  |  |  | 1girl, bass_guitar, forehead, hair_bobbles, open_mouth, smile, solo, striped_dress, topknot, holding_instrument, looking_at_viewer, boots, teeth, two-tone_dress, strapless_dress |
| 2 | 7 |  |  |  |  |  | 1girl, bass_guitar, forehead, hair_bobbles, strapless_dress, striped_dress, topknot, holding_instrument, looking_at_viewer, open_mouth, smile, solo, tongue, plectrum, teeth, two-tone_dress, boots, platform_footwear, spiked_hair |
| 3 | 10 |  |  |  |  |  | 1girl, bass_guitar, boots, hair_bobbles, open_mouth, striped_dress, topknot, forehead, holding_instrument, pokemon_(creature), smile, spiked_hair, teeth, tongue, two-tone_dress, looking_at_viewer, plectrum |
| 4 | 5 |  |  |  |  |  | 1girl, hair_bobbles, looking_at_viewer, strapless_dress, striped_dress, tongue_out, topknot, bass_guitar, forehead, solo, platform_boots, two-tone_dress |
| 5 | 5 |  |  |  |  |  | 1girl, bass_guitar, hair_bobbles, holding_poke_ball, striped_dress, topknot, short_dress, solo, poke_ball_(basic), bra_strap, grin, platform_boots, standing |
| 6 | 5 |  |  |  |  |  | 1girl, forehead, hair_bobbles, pokemon_(creature), smile, striped_dress, topknot, open_mouth |
| 7 | 5 |  |  |  |  |  | 1girl, blush, hair_bobbles, kemonomimi_mode, topknot, cat_ears, cat_tail, open_mouth, solo, fang, striped |
| 8 | 11 |  |  |  |  |  | 1girl, hair_bobbles, navel, small_breasts, topknot, solo, blush, looking_at_viewer, nipples, bikini, nude, pussy, smile, full_body, spread_legs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bass_guitar | hair_bobbles | striped_dress | topknot | boots | pokemon_(creature) | open_mouth | smile | forehead | solo | holding_instrument | looking_at_viewer | teeth | two-tone_dress | strapless_dress | tongue | plectrum | platform_footwear | spiked_hair | tongue_out | platform_boots | holding_poke_ball | short_dress | poke_ball_(basic) | bra_strap | grin | standing | blush | kemonomimi_mode | cat_ears | cat_tail | fang | striped | navel | small_breasts | nipples | bikini | nude | pussy | full_body | spread_legs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:---------------|:----------------|:----------|:--------|:---------------------|:-------------|:--------|:-----------|:-------|:---------------------|:--------------------|:--------|:-----------------|:------------------|:---------|:-----------|:--------------------|:--------------|:-------------|:-----------------|:--------------------|:--------------|:--------------------|:------------|:-------|:-----------|:--------|:------------------|:-----------|:-----------|:-------|:----------|:--------|:----------------|:----------|:---------|:-------|:--------|:------------|:--------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | X | X | | X | | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | X | | X | | | X | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | |
| 8 | 11 |  |  |  |  |  | X | | X | | X | | | | X | | X | | X | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/homika_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T11:28:28+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T19:04:37+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of homika (Pokémon)
===========================
This is the dataset of homika (Pokémon), containing 500 images and their tags.
The core tags of this character are 'white\_hair, hair\_ornament, blue\_eyes, freckles, short\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
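Below is a minimal sketch of that loading flow, assuming the same `dataset-raw.zip` layout and waifuc `LocalSource` API used across the CyberHarem datasets:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive of this dataset
zip_file = hf_hub_download(
    repo_id='CyberHarem/homika_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files into a local directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# iterate over the tagged images with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```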
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
af8148f2629b8fe987edc461cca39c82701dea8a
|
# Dataset Card for "pubmed_nonbiomedicalrap_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/pubmed_nonbiomedicalrap_2
|
[
"region:us"
] |
2023-09-11T11:33:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "MedlineCitation", "struct": [{"name": "PMID", "dtype": "int32"}, {"name": "DateCompleted", "struct": [{"name": "Year", "dtype": "int32"}, {"name": "Month", "dtype": "int32"}, {"name": "Day", "dtype": "int32"}]}, {"name": "NumberOfReferences", "dtype": "int32"}, {"name": "DateRevised", "struct": [{"name": "Year", "dtype": "int32"}, {"name": "Month", "dtype": "int32"}, {"name": "Day", "dtype": "int32"}]}, {"name": "Article", "struct": [{"name": "Abstract", "struct": [{"name": "AbstractText", "dtype": "string"}]}, {"name": "ArticleTitle", "dtype": "string"}, {"name": "AuthorList", "struct": [{"name": "Author", "sequence": [{"name": "LastName", "dtype": "string"}, {"name": "ForeName", "dtype": "string"}, {"name": "Initials", "dtype": "string"}, {"name": "CollectiveName", "dtype": "string"}]}]}, {"name": "Language", "dtype": "string"}, {"name": "GrantList", "struct": [{"name": "Grant", "sequence": [{"name": "GrantID", "dtype": "string"}, {"name": "Agency", "dtype": "string"}, {"name": "Country", "dtype": "string"}]}]}, {"name": "PublicationTypeList", "struct": [{"name": "PublicationType", "sequence": "string"}]}]}, {"name": "MedlineJournalInfo", "struct": [{"name": "Country", "dtype": "string"}]}, {"name": "ChemicalList", "struct": [{"name": "Chemical", "sequence": [{"name": "RegistryNumber", "dtype": "string"}, {"name": "NameOfSubstance", "dtype": "string"}]}]}, {"name": "CitationSubset", "dtype": "string"}, {"name": "MeshHeadingList", "struct": [{"name": "MeshHeading", "sequence": [{"name": "DescriptorName", "dtype": "string"}, {"name": "QualifierName", "dtype": "string"}]}]}]}, {"name": "PubmedData", "struct": [{"name": "ArticleIdList", "sequence": [{"name": "ArticleId", "sequence": "string"}]}, {"name": "PublicationStatus", "dtype": "string"}, {"name": "History", "struct": [{"name": "PubMedPubDate", "sequence": [{"name": "Year", "dtype": "int32"}, {"name": "Month", "dtype": "int32"}, {"name": "Day", "dtype": "int32"}]}]}, {"name": "ReferenceList", "sequence": [{"name": "Citation", "dtype": "string"}, {"name": "CitationId", "dtype": "int32"}]}]}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3906927.162, "num_examples": 999}], "download_size": 2127120, "dataset_size": 3906927.162}}
|
2023-09-11T11:33:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed_nonbiomedicalrap_2"
More Information needed
|
[
"# Dataset Card for \"pubmed_nonbiomedicalrap_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed_nonbiomedicalrap_2\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_nonbiomedicalrap_2\"\n\nMore Information needed"
] |
08167c6b01d8337d7273fb8f3b8ba8eec96514c5
|
# Dataset Card for "Arabic-samsum-dialogsum"
This dataset is a combination of the SAMSum and DialogSum datasets translated into Arabic.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal; they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Arabic
## Dataset Structure
### Data Instances
The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
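A minimal loading sketch with the `datasets` library, assuming the repo id under which this card is hosted; the field names follow the Data Fields section below:

```python
from datasets import load_dataset

# load the train split of the translated corpus
ds = load_dataset("mohamedemam/Arabic-samsum-dialogsum", split="train")

first = ds[0]
print(first["id"])        # unique id of the example
print(first["dialogue"])  # dialogue text (Arabic)
print(first["summary"])   # human-written summary
```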
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
### Data Splits
- train: 24732
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
### Contributions
Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mohamedemam/Arabic-samsum-dialogsum
|
[
"task_categories:summarization",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:ar",
"license:cc-by-nc-2.0",
"arxiv:1911.12237",
"region:us"
] |
2023-09-11T11:48:44+00:00
|
{"language": ["ar"], "license": "cc-by-nc-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "conversational"], "pretty_name": "ar messum", "dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27913254, "num_examples": 24813}], "download_size": 13968520, "dataset_size": 27913254}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-11T13:35:29+00:00
|
[
"1911.12237"
] |
[
"ar"
] |
TAGS
#task_categories-summarization #task_categories-conversational #size_categories-10K<n<100K #language-Arabic #license-cc-by-nc-2.0 #arxiv-1911.12237 #region-us
|
# Dataset Card for "Arabic-samsum-dialogsum"
This dataset is a combination of the SAMSum and DialogSum datasets translated into Arabic.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper: URL
- Leaderboard:
- Point of Contact:
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal; they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
### Languages
Arabic
## Dataset Structure
### Data Instances
The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
### Data Splits
- train: 24732
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Contributions
Thanks to @cccntu for adding this dataset.
More Information needed
|
[
"# Dataset Card for \"Arabic-samsum-dialogsum\"\n\nthis dataset is comption between samsum and dialogsum dataset translated in arabic",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger convesations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.\nThe SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).",
"### Supported Tasks and Leaderboards",
"### Languages\n\nArabic",
"## Dataset Structure\nt",
"### Data Instances\n\nThe created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in con- versations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people\n\nThe first instance in the training set:\n{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': \"Amanda: I baked cookies. Do you want some?\\r\\nJerry: Sure!\\r\\nAmanda: I'll bring you tomorrow :-)\"}",
"### Data Fields\n\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique id of an example.",
"### Data Splits\n\n- train: 24732",
"## Dataset Creation",
"### Curation Rationale\n\nIn paper:\n> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typ- ically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assis- tant and a client buying petrol.\nAs a consequence, we decided to create a chat dialogue dataset by constructing such conversa- tions that would epitomize the style of a messenger app.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n In paper:\n> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.",
"#### Who are the source language producers?\n\nlinguists",
"### Annotations",
"#### Annotation process\n\nIn paper:\n> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one ref- erence summary.",
"#### Who are the annotators?\n\nlanguage experts",
"### Personal and Sensitive Information\n\nNone, see above: Initial Data Collection and Normalization",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nnon-commercial licence: CC BY-NC-ND 4.0",
"### Contributions\n\nThanks to @cccntu for adding this dataset.\nMore Information needed"
] |
[
"TAGS\n#task_categories-summarization #task_categories-conversational #size_categories-10K<n<100K #language-Arabic #license-cc-by-nc-2.0 #arxiv-1911.12237 #region-us \n",
"# Dataset Card for \"Arabic-samsum-dialogsum\"\n\nthis dataset is comption between samsum and dialogsum dataset translated in arabic",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger convesations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.\nThe SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).",
"### Supported Tasks and Leaderboards",
"### Languages\n\nArabic",
"## Dataset Structure\nt",
"### Data Instances\n\nThe created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in con- versations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people\n\nThe first instance in the training set:\n{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': \"Amanda: I baked cookies. Do you want some?\\r\\nJerry: Sure!\\r\\nAmanda: I'll bring you tomorrow :-)\"}",
"### Data Fields\n\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique id of an example.",
"### Data Splits\n\n- train: 24732",
"## Dataset Creation",
"### Curation Rationale\n\nIn paper:\n> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typ- ically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assis- tant and a client buying petrol.\nAs a consequence, we decided to create a chat dialogue dataset by constructing such conversa- tions that would epitomize the style of a messenger app.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n In paper:\n> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.",
"#### Who are the source language producers?\n\nlinguists",
"### Annotations",
"#### Annotation process\n\nIn paper:\n> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one ref- erence summary.",
"#### Who are the annotators?\n\nlanguage experts",
"### Personal and Sensitive Information\n\nNone, see above: Initial Data Collection and Normalization",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nnon-commercial licence: CC BY-NC-ND 4.0",
"### Contributions\n\nThanks to @cccntu for adding this dataset.\nMore Information needed"
] |
[
63,
36,
120,
26,
191,
10,
5,
7,
173,
31,
10,
5,
201,
4,
99,
12,
5,
83,
11,
22,
8,
7,
8,
7,
5,
6,
20,
20
] |
[
"passage: TAGS\n#task_categories-summarization #task_categories-conversational #size_categories-10K<n<100K #language-Arabic #license-cc-by-nc-2.0 #arxiv-1911.12237 #region-us \n# Dataset Card for \"Arabic-samsum-dialogsum\"\n\nthis dataset is comption between samsum and dialogsum dataset translated in arabic## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThe SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger convesations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.\nThe SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).### Supported Tasks and Leaderboards### Languages\n\nArabic## Dataset Structure\nt",
"passage: ### Data Instances\n\nThe created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in con- versations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people\n\nThe first instance in the training set:\n{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': \"Amanda: I baked cookies. Do you want some?\\r\\nJerry: Sure!\\r\\nAmanda: I'll bring you tomorrow :-)\"}### Data Fields\n\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique id of an example.### Data Splits\n\n- train: 24732## Dataset Creation### Curation Rationale\n\nIn paper:\n> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typ- ically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assis- tant and a client buying petrol.\nAs a consequence, we decided to create a chat dialogue dataset by constructing such conversa- tions that would epitomize the style of a messenger app.### Source Data#### Initial Data Collection and Normalization\n\n In paper:\n> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.#### Who are the source language producers?\n\nlinguists### Annotations#### Annotation process\n\nIn paper:\n> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one ref- erence summary.#### Who are the annotators?\n\nlanguage experts### Personal and Sensitive Information\n\nNone, see above: Initial Data Collection and Normalization## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases"
] |
d3ca21c4fd7b2adc6590cba8b32b5a690ba69584
|
# Dataset Card for Pokémon BLIP captions
_Dataset used to train [Pokémon text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_
BLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by _Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis_ (FastGAN). Original images were obtained from [FastGAN-pytorch](https://github.com/odegeasslbc/FastGAN-pytorch) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a PIL JPEG of varying size, and `text` is the accompanying text caption. Only a train split is provided.
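As a minimal usage sketch (assuming the `datasets` library; the repo id is taken from the citation below):

```python
from datasets import load_dataset

# only a train split is provided
ds = load_dataset("lambdalabs/pokemon-blip-captions", split="train")

row = ds[0]
row["image"].save("sample.jpg")  # PIL JPEG of varying size
print(row["text"])               # BLIP-generated caption
```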
## Examples

> a drawing of a green pokemon with red eyes

> a green and yellow toy with a red nose

> a red and white ball with an angry look on its face
## Citation
If you use this dataset, please cite it as:
```
@misc{pinkney2022pokemon,
author = {Pinkney, Justin N. M.},
title = {Pokemon BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/}}
}
```
|
polinaeterna/pokemon-blip-captions
|
[
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-09-11T11:57:25+00:00
|
{"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["huggan/few-shot-pokemon"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Pok\u00e9mon BLIP captions", "tags": [], "duplicated_from": "lambdalabs/pokemon-blip-captions", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119417305.0, "num_examples": 833}], "download_size": 0, "dataset_size": 119417305.0}}
|
2023-09-11T13:22:52+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #source_datasets-huggan/few-shot-pokemon #language-English #license-cc-by-nc-sa-4.0 #region-us
|
# Dataset Card for Pokémon BLIP captions
_Dataset used to train Pokémon text to image model_
BLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by _Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis_ (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row the dataset contains 'image' and 'text' keys. 'image' is a PIL JPEG of varying size, and 'text' is the accompanying text caption. Only a train split is provided.
## Examples
!URL
> a drawing of a green pokemon with red eyes
!URL
> a green and yellow toy with a red nose
!URL
> a red and white ball with an angry look on its face
If you use this dataset, please cite it as:
|
[
"# Dataset Card for Pokémon BLIP captions\n\n_Dataset used to train Pokémon text to image model_\n\nBLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by _Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis_ (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.",
"## Examples\n\n\n!URL\n> a drawing of a green pokemon with red eyes\n\n!URL\n> a green and yellow toy with a red nose\n\n!URL\n> a red and white ball with an angry look on its face\n\nIf you use this dataset, please cite it as:"
] |
[
"TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #source_datasets-huggan/few-shot-pokemon #language-English #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for Pokémon BLIP captions\n\n_Dataset used to train Pokémon text to image model_\n\nBLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by _Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis_ (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.",
"## Examples\n\n\n!URL\n> a drawing of a green pokemon with red eyes\n\n!URL\n> a green and yellow toy with a red nose\n\n!URL\n> a red and white ball with an angry look on its face\n\nIf you use this dataset, please cite it as:"
] |
[
91,
150,
57
] |
[
"passage: TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #source_datasets-huggan/few-shot-pokemon #language-English #license-cc-by-nc-sa-4.0 #region-us \n# Dataset Card for Pokémon BLIP captions\n\n_Dataset used to train Pokémon text to image model_\n\nBLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by _Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis_ (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.## Examples\n\n\n!URL\n> a drawing of a green pokemon with red eyes\n\n!URL\n> a green and yellow toy with a red nose\n\n!URL\n> a red and white ball with an angry look on its face\n\nIf you use this dataset, please cite it as:"
] |
0750aff0c6bf1e6af36192aad2c3a2e9363577fb
|
# Dataset Card for The ASD QA Dataset (train set)
## Dataset Description
- **Repository:** https://github.com/vifirsanova/empi
### Dataset Summary
A dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: https://aspergers.ru.
### Languages
Russian
## Dataset Structure
The dataset inherits SQuAD 2.0 structure.
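Going by the feature schema declared in this repo's metadata (question; answers with answer_start, answer_end, text; paragraph), a minimal loading sketch looks like this; the repo id is assumed from where this card is hosted:

```python
from datasets import load_dataset

# the repo metadata declares a single "train" split for this set
ds = load_dataset("missvector/asd-qa-train", split="train")

row = ds[0]
print(row["question"])                 # question string
print(row["paragraph"])                # context paragraph the answer is drawn from
print(row["answers"]["text"])          # answer span text
print(row["answers"]["answer_start"],  # character offsets of the span
      row["answers"]["answer_end"])
```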
### Source Data
https://aspergers.ru
### Dataset Curators
Victoria Firsanova
|
missvector/asd-qa-train
|
[
"license:mit",
"region:us"
] |
2023-09-11T12:04:38+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_end", "dtype": "int64"}, {"name": "answer_start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "paragraph", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3060746, "num_examples": 2593}], "download_size": 450478, "dataset_size": 3060746}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-13T11:30:54+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for The ASD QA Dataset (train set)
## Dataset Description
- Repository: URL
### Dataset Summary
A dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.
### Languages
Russian
## Dataset Structure
The dataset inherits SQuAD 2.0 structure.
### Source Data
URL
### Dataset Curators
Victoria Firsanova
|
[
"# Dataset Card for The ASD QA Dataset (train set)",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.",
"### Languages\n\nRussian",
"## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.",
"### Source Data\n\nURL",
"### Dataset Curators\n\nVictoria Firsanova"
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for The ASD QA Dataset (train set)",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.",
"### Languages\n\nRussian",
"## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.",
"### Source Data\n\nURL",
"### Dataset Curators\n\nVictoria Firsanova"
] |
[
11,
17,
10,
52,
5,
18,
5,
10
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for The ASD QA Dataset (train set)## Dataset Description\n\n- Repository: URL### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.### Languages\n\nRussian## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.### Source Data\n\nURL### Dataset Curators\n\nVictoria Firsanova"
] |
5ed4e154f5a6d802d4c9fc6d75e3937e500e3108
|
# Dataset Card for "egw_quick_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Gideonah/egw_quick_test
|
[
"region:us"
] |
2023-09-11T12:06:15+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10981, "num_examples": 9}], "download_size": 9434, "dataset_size": 10981}}
|
2023-09-11T12:06:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "egw_quick_test"
More Information needed
|
[
"# Dataset Card for \"egw_quick_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"egw_quick_test\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"egw_quick_test\"\n\nMore Information needed"
] |
717fe3a468a0bfda17b3bc3b03c0540522f2307d
|
# PVIT dataset
This is the stage 1 pretraining dataset of the paper: [Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models](https://arxiv.org/abs/2308.13437).
## Model description
Position-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning them to the MLLM will be relatively small. Following [LLaVA](https://github.com/haotian-liu/LLaVA), we design a two-stage training strategy for PVIT that first pre-trains a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.
For more details, please refer to our [paper](https://arxiv.org/abs/2308.13437) and [github repo](https://github.com/THUNLP-MT/PVIT).
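As a rough illustration of the stage-1 idea described above (this is not the authors' code; the dimensions, module name, and random inputs are hypothetical):

```python
import torch
import torch.nn as nn

class RegionProjector(nn.Module):
    """Sketch of the stage-1 module: map region-level features into the LLM embedding space."""

    def __init__(self, region_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # stage 1 trains only this projection; the region encoder and the LLM stay frozen
        self.proj = nn.Linear(region_dim, llm_dim)

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, num_regions, region_dim), e.g. region encoder outputs
        return self.proj(region_feats)  # -> (batch, num_regions, llm_dim)

# illustrative usage with random tensors standing in for real region features
projector = RegionProjector()
fake_region_feats = torch.randn(2, 4, 1024)
print(projector(fake_region_feats).shape)  # torch.Size([2, 4, 4096])
```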
## How to use
See [here](https://github.com/THUNLP-MT/PVIT#Train) for pretraining instructions.
## Intended use
Primary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## BibTeX entry and citation info
```bibtex
@misc{chen2023positionenhanced,
title={Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models},
author={Chi Chen and Ruoyu Qin and Fuwen Luo and Xiaoyue Mi and Peng Li and Maosong Sun and Yang Liu},
year={2023},
eprint={2308.13437},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
PVIT/pvit_data_stage1
|
[
"license:cc-by-nc-4.0",
"arxiv:2308.13437",
"region:us"
] |
2023-09-11T12:07:20+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-09-19T02:54:56+00:00
|
[
"2308.13437"
] |
[] |
TAGS
#license-cc-by-nc-4.0 #arxiv-2308.13437 #region-us
|
# PVIT dataset
This is the stage 1 pretraining dataset of the paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.
## Model description
Position-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning them to the MLLM will be relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-trains a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.
For more details, please refer to our paper and github repo.
## How to use
See here for pretraining instructions.
## Intended use
Primary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## BibTeX entry and citation info
|
[
"# PVIT dataset\n\nThis is the stage 1 pretraining dataset of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.",
"## Model description\n\nPosition-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning it to the MLLM will be relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-training a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.\n\nFor more details, please refer to our paper and github repo.",
"## How to use\n\nSee here for instructions of pretraining.",
"## Intended use\n\nPrimary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#license-cc-by-nc-4.0 #arxiv-2308.13437 #region-us \n",
"# PVIT dataset\n\nThis is the stage 1 pretraining dataset of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.",
"## Model description\n\nPosition-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning it to the MLLM will be relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-training a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.\n\nFor more details, please refer to our paper and github repo.",
"## How to use\n\nSee here for instructions of pretraining.",
"## Intended use\n\nPrimary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## BibTeX entry and citation info"
] |
[
25,
36,
230,
12,
65,
10
] |
[
"passage: TAGS\n#license-cc-by-nc-4.0 #arxiv-2308.13437 #region-us \n# PVIT dataset\n\nThis is the stage 1 pretraining dataset of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.## Model description\n\nPosition-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning it to the MLLM will be relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-training a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.\n\nFor more details, please refer to our paper and github repo.## How to use\n\nSee here for instructions of pretraining.## Intended use\n\nPrimary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.## BibTeX entry and citation info"
] |
ae7828f0fbaf5e4a08ea5bbd09b05924ef39a1c3
|
# Dataset Card for The ASD QA Dataset (validation set)
## Dataset Description
- **Repository:** https://github.com/vifirsanova/empi
### Dataset Summary
A dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: https://aspergers.ru.
### Languages
Russian
## Dataset Structure
The dataset inherits SQuAD 2.0 structure.
### Source Data
https://aspergers.ru
### Dataset Curators
Victoria Firsanova
|
missvector/asd-qa-val
|
[
"license:mit",
"region:us"
] |
2023-09-11T12:13:47+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_end", "dtype": "int64"}, {"name": "answer_start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "paragraph", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 316067, "num_examples": 261}], "download_size": 54962, "dataset_size": 316067}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-13T11:31:20+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for The ASD QA Dataset (validation set)
## Dataset Description
- Repository: URL
### Dataset Summary
A dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.
### Languages
Russian
## Dataset Structure
The dataset inherits SQuAD 2.0 structure.
### Source Data
URL
### Dataset Curators
Victoria Firsanova
|
[
"# Dataset Card for The ASD QA Dataset (validation set)",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.",
"### Languages\n\nRussian",
"## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.",
"### Source Data\n\nURL",
"### Dataset Curators\n\nVictoria Firsanova"
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for The ASD QA Dataset (validation set)",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.",
"### Languages\n\nRussian",
"## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.",
"### Source Data\n\nURL",
"### Dataset Curators\n\nVictoria Firsanova"
] |
[
11,
18,
10,
52,
5,
18,
5,
10
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for The ASD QA Dataset (validation set)## Dataset Description\n\n- Repository: URL### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.### Languages\n\nRussian## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.### Source Data\n\nURL### Dataset Curators\n\nVictoria Firsanova"
] |
f189d648fda10e02afbfd61ba48a065cc444e4f7
|
# Dataset Card for The ASD QA Dataset (test set)
## Dataset Description
- **Repository:** https://github.com/vifirsanova/empi
### Dataset Summary
A dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: https://aspergers.ru.
### Languages
Russian
## Dataset Structure
The dataset inherits SQuAD 2.0 structure.
### Source Data
https://aspergers.ru
### Dataset Curators
Victoria Firsanova
|
missvector/asd-qa-test
|
[
"license:mit",
"region:us"
] |
2023-09-11T12:14:48+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_end", "dtype": "int64"}, {"name": "answer_start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "paragraph", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1573377, "num_examples": 1284}], "download_size": 218618, "dataset_size": 1573377}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-13T11:31:42+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for The ASD QA Dataset (test set)
## Dataset Description
- Repository: URL
### Dataset Summary
A dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.
### Languages
Russian
## Dataset Structure
The dataset inherits SQuAD 2.0 structure.
### Source Data
URL
### Dataset Curators
Victoria Firsanova
|
[
"# Dataset Card for The ASD QA Dataset (test set)",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.",
"### Languages\n\nRussian",
"## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.",
"### Source Data\n\nURL",
"### Dataset Curators\n\nVictoria Firsanova"
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for The ASD QA Dataset (test set)",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.",
"### Languages\n\nRussian",
"## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.",
"### Source Data\n\nURL",
"### Dataset Curators\n\nVictoria Firsanova"
] |
[
11,
16,
10,
52,
5,
18,
5,
10
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for The ASD QA Dataset (test set)## Dataset Description\n\n- Repository: URL### Dataset Summary\n\nA dataset for question-answering used for building an informational Russian language chatbot for the inclusion of people with autism spectrum disorder and Asperger syndrome in particular, based on data from the following website: URL.### Languages\n\nRussian## Dataset Structure\n\nThe dataset inherits SQuAD 2.0 structure.### Source Data\n\nURL### Dataset Curators\n\nVictoria Firsanova"
] |
acc0c9708b33a35562882021e6b148b722c8ca2b
|
# Dataset Card for "d609a098"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/d609a098
|
[
"region:us"
] |
2023-09-11T12:42:15+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 178, "num_examples": 10}], "download_size": 1312, "dataset_size": 178}}
|
2023-09-11T12:42:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "d609a098"
More Information needed
|
[
"# Dataset Card for \"d609a098\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"d609a098\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"d609a098\"\n\nMore Information needed"
] |
bbd1ee859c81a004ac3a406cbe38e82c5cba4fe6
|
# PVIT dataset
This is the stage 2 pretraining dataset of the paper: [Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models](https://arxiv.org/abs/2308.13437).
## Model description
Position-enhanced Visual Instruction Tuning (PVIT) extends the MLLM with an additional region-level vision encoder to support region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and use it to extract region-level features, taking images and regions as inputs. Incorporating region-level features as an additional source of information in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to language at a fine-grained level, the overhead of aligning them to the MLLM is relatively small. Following [LLaVA](https://github.com/haotian-liu/LLaVA), we design a two-stage training strategy for PVIT that first pre-trains a linear projection to align the region features to the LLM word embeddings, followed by end-to-end fine-tuning to follow complex fine-grained instructions.
For more details, please refer to our [paper](https://arxiv.org/abs/2308.13437) and [github repo](https://github.com/THUNLP-MT/PVIT).
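To make the two-stage strategy concrete, the sketch below shows the stage-1 operation in miniature: a single linear layer projects region-level features into the LLM word-embedding space. This is an illustrative sketch rather than the official PVIT code, and both feature dimensions are placeholder assumptions.
```python
import torch
import torch.nn as nn

# Placeholder dimensions (assumptions, not the actual PVIT sizes).
region_feat_dim = 1024  # hypothetical RegionCLIP region-feature size
llm_embed_dim = 4096    # hypothetical LLM word-embedding size

# Stage 1 pre-trains this projection to align region features
# with the LLM word-embedding space.
region_projector = nn.Linear(region_feat_dim, llm_embed_dim)

# Four region features extracted for one image (batch of 1).
region_features = torch.randn(1, 4, region_feat_dim)
region_tokens = region_projector(region_features)  # (1, 4, llm_embed_dim)

# The projected region tokens can then be combined with the text
# embeddings before being fed to the language model.
print(region_tokens.shape)
```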
## How to use
See [here](https://github.com/THUNLP-MT/PVIT#Train) for pretraining instructions.
## Intended use
Primary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## BibTeX entry and citation info
```bibtex
@misc{chen2023positionenhanced,
title={Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models},
author={Chi Chen and Ruoyu Qin and Fuwen Luo and Xiaoyue Mi and Peng Li and Maosong Sun and Yang Liu},
year={2023},
eprint={2308.13437},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
PVIT/pvit_data_stage2
|
[
"license:cc-by-nc-4.0",
"arxiv:2308.13437",
"region:us"
] |
2023-09-11T12:44:57+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-09-19T02:55:19+00:00
|
[
"2308.13437"
] |
[] |
TAGS
#license-cc-by-nc-4.0 #arxiv-2308.13437 #region-us
|
# PVIT dataset
This is the stage 2 pretraining dataset of the paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.
## Model description
Position-enhanced Visual Instruction Tuning (PVIT) extends the MLLM with an additional region-level vision encoder to support region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and use it to extract region-level features, taking images and regions as inputs. Incorporating region-level features as an additional source of information in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to language at a fine-grained level, the overhead of aligning them to the MLLM is relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-trains a linear projection to align the region features to the LLM word embeddings, followed by end-to-end fine-tuning to follow complex fine-grained instructions.
For more details, please refer to our paper and github repo.
## How to use
See here for pretraining instructions.
## Intended use
Primary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## BibTeX entry and citation info
|
[
"# PVIT dataset\n\nThis is the stage 2 pretraining dataset of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.",
"## Model description\n\nPosition-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning it to the MLLM will be relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-training a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.\n\nFor more details, please refer to our paper and github repo.",
"## How to use\n\nSee here for instructions of pretraining.",
"## Intended use\n\nPrimary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#license-cc-by-nc-4.0 #arxiv-2308.13437 #region-us \n",
"# PVIT dataset\n\nThis is the stage 2 pretraining dataset of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.",
"## Model description\n\nPosition-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning it to the MLLM will be relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-training a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.\n\nFor more details, please refer to our paper and github repo.",
"## How to use\n\nSee here for instructions of pretraining.",
"## Intended use\n\nPrimary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## BibTeX entry and citation info"
] |
[
25,
36,
230,
12,
65,
10
] |
[
"passage: TAGS\n#license-cc-by-nc-4.0 #arxiv-2308.13437 #region-us \n# PVIT dataset\n\nThis is the stage 2 pretraining dataset of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models.## Model description\n\nPosition-enhanced Visual Instruction Tuning (PVIT) extends the MLLM by incorporating an additional region-level vision encoder to facilitate support for region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and utilize it to extract region-level features by taking images and regions as inputs. As an additional source of information, the incorporation of region-level features in this way has a minimal impact on the original MLLM. Furthermore, since the features provided by RegionCLIP are themselves already aligned to the language at a fine-grained level, the overhead of aligning it to the MLLM will be relatively small. Following LLaVA, we design a two-stage training strategy for PVIT that first pre-training a linear projection to align the region features to the LLM word embedding, followed by end-to-end fine-tuning to follow complex fine-grained instructions.\n\nFor more details, please refer to our paper and github repo.## How to use\n\nSee here for instructions of pretraining.## Intended use\n\nPrimary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.## BibTeX entry and citation info"
] |
81cf79fc6a19c645c67bdf99a8a1d43a20a3d7db
|
# Dataset Card for "TextCaps-VQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shwetkm/TextCaps-VQA
|
[
"region:us"
] |
2023-09-11T12:52:38+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "image_url", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "sentence_answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8006904, "num_examples": 13895}], "download_size": 4140362, "dataset_size": 8006904}}
|
2023-11-11T03:23:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "TextCaps-VQA"
More Information needed
|
[
"# Dataset Card for \"TextCaps-VQA\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"TextCaps-VQA\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"TextCaps-VQA\"\n\nMore Information needed"
] |
408f3feccda5d694bbeb46f37a177b7e4a4b0319
|
# Dataset of florence_nightingale/ナイチンゲール/南丁格尔 (Fate/Grand Order)
This is the dataset of florence_nightingale/ナイチンゲール/南丁格尔 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `long_hair, pink_hair, breasts, red_eyes, large_breasts, braid, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 714.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/florence_nightingale_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 389.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/florence_nightingale_fgo/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1206 | 811.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/florence_nightingale_fgo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 624.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/florence_nightingale_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1206 | 1.13 GiB | [Download](https://huggingface.co/datasets/CyberHarem/florence_nightingale_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/florence_nightingale_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, looking_at_viewer, military_uniform, solo, white_gloves, skirt, belt, white_pantyhose, between_breasts |
| 1 | 13 |  |  |  |  |  | 1girl, black_coat, black_skirt, coat_on_shoulders, military_uniform, red_jacket, solo, white_gloves, white_pantyhose, bandage_over_one_eye, looking_at_viewer, pleated_skirt, belt, handgun, bandages, fur-trimmed_sleeves, long_sleeves |
| 2 | 17 |  |  |  |  |  | 1girl, bandage_over_one_eye, solo, looking_at_viewer, military_uniform, upper_body, bandages, red_jacket, coat_on_shoulders, white_gloves, black_coat, simple_background, white_background, long_sleeves, closed_mouth |
| 3 | 5 |  |  |  |  |  | 1girl, closed_mouth, military_uniform, red_jacket, solo, upper_body, braided_ponytail, looking_at_viewer, simple_background, white_background, long_sleeves, white_gloves, blush |
| 4 | 8 |  |  |  |  |  | 1girl, santa_hat, solo, looking_at_viewer, red_headwear, santa_costume, white_gloves, very_long_hair, fur-trimmed_sleeves, holding_gun, long_sleeves, fur-trimmed_headwear, pantyhose, red_jacket, blush, green_bow, night, skirt, sky |
| 5 | 33 |  |  |  |  |  | 1girl, green_gloves, official_alternate_costume, green_bikini, shrug_(clothing), nurse_cap, revealing_clothes, layered_bikini, purple_bikini, looking_at_viewer, navel, thigh_boots, cleavage, solo, green_thighhighs, thighhighs_under_boots, short_sleeves, black_footwear, garter_straps, holding, syringe, blush, black_headwear, purple_belt, black_skirt, elbow_gloves, microskirt, thighs |
| 6 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blue_sky, blush, day, folded_ponytail, navel, yellow_bikini, braided_ponytail, cleavage, cloud, collarbone, parted_lips, underboob, wet, bare_shoulders, closed_mouth, outdoors, smile, stomach, thighs, water |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | military_uniform | solo | white_gloves | skirt | belt | white_pantyhose | between_breasts | black_coat | black_skirt | coat_on_shoulders | red_jacket | bandage_over_one_eye | pleated_skirt | handgun | bandages | fur-trimmed_sleeves | long_sleeves | upper_body | simple_background | white_background | closed_mouth | braided_ponytail | blush | santa_hat | red_headwear | santa_costume | very_long_hair | holding_gun | fur-trimmed_headwear | pantyhose | green_bow | night | sky | green_gloves | official_alternate_costume | green_bikini | shrug_(clothing) | nurse_cap | revealing_clothes | layered_bikini | purple_bikini | navel | thigh_boots | cleavage | green_thighhighs | thighhighs_under_boots | short_sleeves | black_footwear | garter_straps | holding | syringe | black_headwear | purple_belt | elbow_gloves | microskirt | thighs | blue_sky | day | folded_ponytail | yellow_bikini | cloud | collarbone | parted_lips | underboob | wet | bare_shoulders | outdoors | smile | stomach | water |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------------------|:-------|:---------------|:--------|:-------|:------------------|:------------------|:-------------|:--------------|:--------------------|:-------------|:-----------------------|:----------------|:----------|:-----------|:----------------------|:---------------|:-------------|:--------------------|:-------------------|:---------------|:-------------------|:--------|:------------|:---------------|:----------------|:-----------------|:--------------|:-----------------------|:------------|:------------|:--------|:------|:---------------|:-----------------------------|:---------------|:-------------------|:------------|:--------------------|:-----------------|:----------------|:--------|:--------------|:-----------|:-------------------|:-------------------------|:----------------|:-----------------|:----------------|:----------|:----------|:-----------------|:--------------|:---------------|:-------------|:---------|:-----------|:------|:------------------|:----------------|:--------|:-------------|:--------------|:------------|:------|:-----------------|:-----------|:--------|:----------|:--------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | X | X | X | X | | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 17 |  |  |  |  |  | X | X | X | X | X | | | | | X | | X | X | X | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | | | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | | X | X | X | | | | | | | X | | | | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 33 |  |  |  |  |  | X | X | | X | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | X | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/florence_nightingale_fgo
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T12:52:43+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-11T22:27:50+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of florence\_nightingale/ナイチンゲール/南丁格尔 (Fate/Grand Order)
================================================================
This is the dataset of florence\_nightingale/ナイチンゲール/南丁格尔 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are 'long\_hair, pink\_hair, breasts, red\_eyes, large\_breasts, braid, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
0c5e6bc92091fcb8b1ce556ffaad7247002da583
|
# Dataset of shou/ショウ (Pokémon)
This is the dataset of shou/ショウ (Pokémon), containing 500 images and their tags.
The core tags of this character are `black_hair, long_hair, white_headwear, sidelocks, ponytail, grey_eyes, eyelashes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 575.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shou_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 323.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shou_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1129 | 656.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shou_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 505.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shou_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1129 | 942.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shou_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shou_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, hetero, nipples, penis, 1boy, blush, navel, open_mouth, red_scarf, sex, solo_focus, vaginal, spread_legs, head_scarf, cum_in_pussy, sweat, large_breasts, bar_censor, galaxy_expedition_team_survey_corps_uniform, on_back, symbol-shaped_pupils, thighhighs |
| 1 | 12 |  |  |  |  |  | 1girl, galaxy_expedition_team_survey_corps_uniform, head_scarf, red_scarf, black_shirt, blush, grey_jacket, looking_at_viewer, open_mouth, upper_body, solo, tongue, :d, sash, simple_background, white_background, logo |
| 2 | 6 |  |  |  |  |  | 1girl, black_shirt, floating_scarf, galaxy_expedition_team_survey_corps_uniform, grey_jacket, head_scarf, holding_poke_ball, red_scarf, sash, looking_at_viewer, solo, black_pantyhose, floating_hair, grey_skirt, loose_socks, open_mouth, logo, smile, tongue, upper_teeth_only |
| 3 | 5 |  |  |  |  |  | 1girl, :d, black_shirt, blush, head_scarf, holding_poke_ball, looking_at_viewer, open_mouth, red_scarf, tongue, grey_jacket, sash, black_pantyhose, grey_skirt, solo, bubble, hand_up, pokemon_(creature), white_background |
| 4 | 7 |  |  |  |  |  | 1girl, brown_footwear, grey_skirt, head_scarf, loose_socks, pokemon_(creature), shoes, black_shirt, galaxy_expedition_team_survey_corps_uniform, grey_jacket, black_pantyhose, open_mouth, red_scarf, :d, grass, standing, logo, outdoors |
| 5 | 6 |  |  |  |  |  | 1girl, head_scarf, red_scarf, simple_background, solo, white_background, portrait, black_eyes, galaxy_expedition_team_survey_corps_uniform, looking_at_viewer, open_mouth |
| 6 | 6 |  |  |  |  |  | 1girl, alternate_breast_size, galaxy_expedition_team_survey_corps_uniform, red_scarf, solo, blush, head_scarf, cleavage, gigantic_breasts, huge_breasts, thighs |
| 7 | 5 |  |  |  |  |  | 1girl, blue_eyes, blush, futanari, head_scarf, penis, red_scarf, solo, testicles, erection, female_pubic_hair, galaxy_expedition_team_survey_corps_uniform, precum, smile, uncensored, looking_at_viewer, arm_support, black_thighhighs, indoors, open_mouth, outdoors, sash, sitting, spread_legs |
| 8 | 6 |  |  |  |  |  | 1girl, blush, hetero, solo_focus, cum_on_breasts, galaxy_expedition_team_survey_corps_uniform, gangbang, head_scarf, multiple_boys, multiple_penises, nipples, facial, nude, red_scarf, bukkake, dark-skinned_male, double_handjob, fellatio, heart, interracial, uncensored |
| 9 | 6 |  |  |  |  |  | 1girl, head_scarf, solo, blush, nipples, open_mouth, after_sex, after_vaginal, anus, ass, cum_in_pussy, looking_at_viewer, black_thighhighs, completely_nude, cumdrip, from_behind, large_breasts, looking_back |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hetero | nipples | penis | 1boy | blush | navel | open_mouth | red_scarf | sex | solo_focus | vaginal | spread_legs | head_scarf | cum_in_pussy | sweat | large_breasts | bar_censor | galaxy_expedition_team_survey_corps_uniform | on_back | symbol-shaped_pupils | thighhighs | black_shirt | grey_jacket | looking_at_viewer | upper_body | solo | tongue | :d | sash | simple_background | white_background | logo | floating_scarf | holding_poke_ball | black_pantyhose | floating_hair | grey_skirt | loose_socks | smile | upper_teeth_only | bubble | hand_up | pokemon_(creature) | brown_footwear | shoes | grass | standing | outdoors | portrait | black_eyes | alternate_breast_size | cleavage | gigantic_breasts | huge_breasts | thighs | blue_eyes | futanari | testicles | erection | female_pubic_hair | precum | uncensored | arm_support | black_thighhighs | indoors | sitting | cum_on_breasts | gangbang | multiple_boys | multiple_penises | facial | nude | bukkake | dark-skinned_male | double_handjob | fellatio | heart | interracial | after_sex | after_vaginal | anus | ass | completely_nude | cumdrip | from_behind | looking_back |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:----------|:--------|:-------|:--------|:--------|:-------------|:------------|:------|:-------------|:----------|:--------------|:-------------|:---------------|:--------|:----------------|:-------------|:----------------------------------------------|:----------|:-----------------------|:-------------|:--------------|:--------------|:--------------------|:-------------|:-------|:---------|:-----|:-------|:--------------------|:-------------------|:-------|:-----------------|:--------------------|:------------------|:----------------|:-------------|:--------------|:--------|:-------------------|:---------|:----------|:---------------------|:-----------------|:--------|:--------|:-----------|:-----------|:-----------|:-------------|:------------------------|:-----------|:-------------------|:---------------|:---------|:------------|:-----------|:------------|:-----------|:--------------------|:---------|:-------------|:--------------|:-------------------|:----------|:----------|:-----------------|:-----------|:----------------|:-------------------|:---------|:-------|:----------|:--------------------|:-----------------|:-----------|:--------|:--------------|:------------|:----------------|:-------|:------|:------------------|:----------|:--------------|:---------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | | | | | X | | X | X | | | | | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | | | | | X | X | | | | | X | | | | | X | | | | X | X | X | | X | X | | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | | X | | X | X | | | | | X | | | | | | | | | X | X | X | | X | X | X | X | | X | | | X | X | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | | | | | | X | X | | | | | X | | | | | X | | | | X | X | | | | | X | | | | X | | | X | | X | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | | | | | X | X | | | | | X | | | | | X | | | | | | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | X | | | X | | | | | X | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | X | | X | | X | X | | | | X | X | | | | | X | | | | | | X | | X | | | X | | | | | | | | | | X | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | X | X | | | X | | | X | | X | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 9 | 6 |  |  |  |  |  | X | | X | | | X | | X | | | | | | X | X | | X | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/shou_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T12:53:47+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T19:50:23+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of shou/ショウ (Pokémon)
=============================
This is the dataset of shou/ショウ (Pokémon), containing 500 images and their tags.
The core tags of this character are 'black\_hair, long\_hair, white\_headwear, sidelocks, ponytail, grey\_eyes, eyelashes, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6183306f5ba82c4318d518f4bd779cec6816b4a5
|
# Dataset Card for "model-sizer-bot-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/model-sizer-bot-stats
|
[
"region:us"
] |
2023-09-11T12:57:12+00:00
|
{"dataset_info": {"features": [{"name": "createdAt", "dtype": "timestamp[us]"}, {"name": "pr_number", "dtype": "int64"}, {"name": "status", "dtype": "large_string"}, {"name": "repo_id", "dtype": "large_string"}, {"name": "type", "dtype": "large_string"}, {"name": "isPullRequest", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 3465, "num_examples": 44}], "download_size": 0, "dataset_size": 3465}}
|
2023-09-11T13:01:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "model-sizer-bot-stats"
More Information needed
|
[
"# Dataset Card for \"model-sizer-bot-stats\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"model-sizer-bot-stats\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"model-sizer-bot-stats\"\n\nMore Information needed"
] |
140334ae2782de5cb744b7104c721f77fc0250d9
|
# Dataset Card for "Spider-SQL-LLAMA2_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
philikai/Spider-SQL-LLAMA2_train
|
[
"size_categories:1K<n<10K",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-09-11T12:58:02+00:00
|
{"license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "db_id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "schema", "dtype": "string"}, {"name": "primary_keys", "dtype": "string"}, {"name": "foreign_keys", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12713675, "num_examples": 8659}, {"name": "validation", "num_bytes": 1169610, "num_examples": 1034}], "download_size": 619836, "dataset_size": 13883285}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
|
2023-12-06T17:07:52+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-cc-by-sa-4.0 #region-us
|
# Dataset Card for "Spider-SQL-LLAMA2_train"
More Information needed
|
[
"# Dataset Card for \"Spider-SQL-LLAMA2_train\"\n\nMore Information needed"
] |
[
"TAGS\n#size_categories-1K<n<10K #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for \"Spider-SQL-LLAMA2_train\"\n\nMore Information needed"
] |
[
29,
21
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-cc-by-sa-4.0 #region-us \n# Dataset Card for \"Spider-SQL-LLAMA2_train\"\n\nMore Information needed"
] |
d3b5d270b3f1dda47dbda76b3a1739b22d8fa4e6
|
# Dataset Card for "librarian-bot-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/librarian-bot-stats
|
[
"region:us"
] |
2023-09-11T12:58:25+00:00
|
{"dataset_info": {"features": [{"name": "createdAt", "dtype": "timestamp[us]"}, {"name": "pr_number", "dtype": "int64"}, {"name": "status", "dtype": "large_string"}, {"name": "repo_id", "dtype": "large_string"}, {"name": "type", "dtype": "large_string"}, {"name": "isPullRequest", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 297708, "num_examples": 3416}], "download_size": 123005, "dataset_size": 297708}}
|
2023-09-11T15:49:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "librarian-bot-stats"
More Information needed
|
[
"# Dataset Card for \"librarian-bot-stats\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"librarian-bot-stats\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"librarian-bot-stats\"\n\nMore Information needed"
] |
baea6d327bb6b98165b773802e1336f4aff82e85
|
# Dataset Card for "DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Photolens/DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted
|
[
"region:us"
] |
2023-09-11T13:02:09+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22432780, "num_examples": 49920}], "download_size": 9066390, "dataset_size": 22432780}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-11T13:02:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted"
More Information needed
|
[
"# Dataset Card for \"DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted\"\n\nMore Information needed"
] |
[
6,
37
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted\"\n\nMore Information needed"
] |
7538d7cd036e888aed6d122c3983bb8c51a62b27
|
# arXiv Figures Dataset
This dataset contains image-text pairs extracted from figures in papers published until the end of 2020 in the [arXiv](https://arxiv.org) repository. The dataset can be used to train [CLIP](https://arxiv.org/abs/2103.00020) models.
This repo contains a [Parquet](https://parquet.apache.org/) file containing the metadata of a [WebDataset](https://github.com/webdataset/webdataset) in [img2dataset](https://github.com/rom1504/img2dataset) format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so [img2dataset](https://github.com/rom1504/img2dataset) cannot be used as is to retrieve the data. Instead, the paper id (e.g. 2103.00020) and file name (e.g. gr3.jpg) are provided as identifiers for each sample. The papers themselves can be downloaded from the [arXiv S3 bucket](https://info.arxiv.org/help/bulk_data_s3.html).
Furthermore, the repo contains a NumPy file which contains the uids of all samples that are not considered duplicates of the [DataComp](https://datacomp.ai) evaluation data. This file can be used to decontaminate the dataset.
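For example, the decontamination file can be applied as a filter over the Parquet metadata. A minimal sketch, assuming both files have been downloaded from this repo; the file names and the `uid` column are assumptions based on the description above:
```python
import numpy as np
import pandas as pd

# A minimal sketch, not an official tool. File names below are
# hypothetical; substitute the actual files from this repo.
meta = pd.read_parquet("arxiv-image-text-metadata.parquet")
keep_uids = set(np.load("datacomp-dedup-uids.npy"))

# Keep only samples whose uid is on the non-duplicate list.
meta = meta[meta["uid"].isin(keep_uids)]
print(len(meta), "samples remain after decontamination")
```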
|
nopperl/arxiv-image-text
|
[
"license:pddl",
"arxiv:2103.00020",
"region:us"
] |
2023-09-11T13:02:31+00:00
|
{"license": "pddl"}
|
2023-11-09T22:59:21+00:00
|
[
"2103.00020"
] |
[] |
TAGS
#license-pddl #arxiv-2103.00020 #region-us
|
# arXiv Figures Dataset
This dataset contains image-text pairs extracted from figures in papers published until the end of 2020 in the arXiv repository. The dataset can be used to train CLIP models.
This repo contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. 2103.00020) and file name (e.g. URL) are provided as identifiers for each sample. The papers themselves can be downloaded from the arXiv S3 bucket.
Furthermore, the repo contains a NumPy file which contains the uids of all samples that are not considered duplicates of the DataComp evaluation data. This file can be used to decontaminate the dataset.
|
[
"# arXiv Figures Dataset\n\nThis dataset contains image-text pairs extracted from figures from papers published until the end of 2020 in the arXiv repository. The dataset can be used to train CLIP models.\n\nThis repo contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. 2103.00020) and file name (e.g. URL) are provided as identifier for each sample. The papers themselves can be downloaded from the arXiv S3 bucket.\n\nFurthermore, the repo contains a NumPy file which contains the uid of all samples that are not considered duplicates to the DataComp evaluation data. This file can be used to decontaminate the dataset."
] |
[
"TAGS\n#license-pddl #arxiv-2103.00020 #region-us \n",
"# arXiv Figures Dataset\n\nThis dataset contains image-text pairs extracted from figures from papers published until the end of 2020 in the arXiv repository. The dataset can be used to train CLIP models.\n\nThis repo contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. 2103.00020) and file name (e.g. URL) are provided as identifier for each sample. The papers themselves can be downloaded from the arXiv S3 bucket.\n\nFurthermore, the repo contains a NumPy file which contains the uid of all samples that are not considered duplicates to the DataComp evaluation data. This file can be used to decontaminate the dataset."
] |
[
21,
227
] |
[
"passage: TAGS\n#license-pddl #arxiv-2103.00020 #region-us \n# arXiv Figures Dataset\n\nThis dataset contains image-text pairs extracted from figures from papers published until the end of 2020 in the arXiv repository. The dataset can be used to train CLIP models.\n\nThis repo contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. 2103.00020) and file name (e.g. URL) are provided as identifier for each sample. The papers themselves can be downloaded from the arXiv S3 bucket.\n\nFurthermore, the repo contains a NumPy file which contains the uid of all samples that are not considered duplicates to the DataComp evaluation data. This file can be used to decontaminate the dataset."
] |
5f0278a7a90a8dbcc0238ed8578d21b7f513e329
|
# PubMed Central Figures Dataset
This dataset contains image-text pairs extracted from figures in papers in the [PubMed Central](https://www.ncbi.nlm.nih.gov/pmc/) repository. The dataset can be used to train [CLIP](https://arxiv.org/abs/2103.00020) models.
This repo contains a [Parquet](https://parquet.apache.org/) file containing the metadata of a [WebDataset](https://github.com/webdataset/webdataset) in [img2dataset](https://github.com/rom1504/img2dataset) format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so [img2dataset](https://github.com/rom1504/img2dataset) cannot be used as is to retrieve the data. Instead, the paper id (e.g. PMC7202302) and file name (e.g. gr3.jpg) are provided as identifiers for each sample. The papers themselves can be downloaded from the [FTP server](https://www.ncbi.nlm.nih.gov/pmc/tools/ftp/).
Furthermore, the repo contains a NumPy file which contains the uids of all samples that are not considered duplicates of the [DataComp](https://datacomp.ai) evaluation data. This file can be used to decontaminate the dataset.
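Since each sample is identified by a paper id plus a file name, retrieval can be batched so that each PMC package is fetched from the FTP server only once. A minimal sketch, assuming the Parquet file has been downloaded; the column names (`paper_id`, `file_name`) are assumptions based on the description above:
```python
import pandas as pd

# A minimal sketch; the file name and column names are hypothetical.
meta = pd.read_parquet("pmc-image-text-metadata.parquet")

# Group figure file names by paper so each PMC package only needs
# to be downloaded once before extracting its figures.
figures_per_paper = meta.groupby("paper_id")["file_name"].apply(list)
print(figures_per_paper.head())
```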
|
nopperl/pmc-image-text
|
[
"license:pddl",
"arxiv:2103.00020",
"region:us"
] |
2023-09-11T13:03:42+00:00
|
{"license": "pddl"}
|
2023-11-09T23:02:04+00:00
|
[
"2103.00020"
] |
[] |
TAGS
#license-pddl #arxiv-2103.00020 #region-us
|
# PubMed Central Figures Dataset
This dataset contains image-text pairs extracted from figures in papers in the PubMed Central repository. The dataset can be used to train CLIP models.
This repo contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. PMC7202302) and file name (e.g. URL) are provided as identifiers for each sample. The papers themselves can be downloaded from the FTP server.
Furthermore, the repo contains a NumPy file which contains the uids of all samples that are not considered duplicates of the DataComp evaluation data. This file can be used to decontaminate the dataset.
|
[
"# PubMed Central Figures Dataset\n \nThis dataset contains image-text pairs extracted from figures from papers in the PubMed Central repository. The dataset can be used to train CLIP models.\n\nThis repo contains contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. PMC7202302) and file name (e.g. URL) are provided as identifier for each sample. The papers themselves can be downloaded from the FTP server.\n\nFurthermore, the repo contains a NumPy file which contains the uid of all samples that are not considered duplicates to the DataComp evaluation data. This file can be used to decontaminate the dataset."
] |
[
"TAGS\n#license-pddl #arxiv-2103.00020 #region-us \n",
"# PubMed Central Figures Dataset\n \nThis dataset contains image-text pairs extracted from figures from papers in the PubMed Central repository. The dataset can be used to train CLIP models.\n\nThis repo contains contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. PMC7202302) and file name (e.g. URL) are provided as identifier for each sample. The papers themselves can be downloaded from the FTP server.\n\nFurthermore, the repo contains a NumPy file which contains the uid of all samples that are not considered duplicates to the DataComp evaluation data. This file can be used to decontaminate the dataset."
] |
[
21,
220
] |
[
"passage: TAGS\n#license-pddl #arxiv-2103.00020 #region-us \n# PubMed Central Figures Dataset\n \nThis dataset contains image-text pairs extracted from figures from papers in the PubMed Central repository. The dataset can be used to train CLIP models.\n\nThis repo contains contains a Parquet file containing the metadata of a WebDataset in img2dataset format. The images themselves are not distributed and need to be retrieved. Note that the images cannot be retrieved by an HTTP URL, so img2dataset cannot be used as is to retrieve the data. Instead, the paper id (e.g. PMC7202302) and file name (e.g. URL) are provided as identifier for each sample. The papers themselves can be downloaded from the FTP server.\n\nFurthermore, the repo contains a NumPy file which contains the uid of all samples that are not considered duplicates to the DataComp evaluation data. This file can be used to decontaminate the dataset."
] |
2664c3a170a5bcc613715f8adad9eee8fbb8c031
|
# Dataset Card for "subtitles_test_set"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joaosanches/subtitles_test_set
|
[
"region:us"
] |
2023-09-11T13:05:18+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "meta", "struct": [{"name": "year", "dtype": "uint32"}, {"name": "imdbId", "dtype": "uint32"}, {"name": "subtitleId", "struct": [{"name": "pt", "dtype": "uint32"}, {"name": "pt_br", "dtype": "uint32"}]}, {"name": "sentenceIds", "struct": [{"name": "pt", "sequence": "uint32"}, {"name": "pt_br", "sequence": "uint32"}]}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3722834.818439022, "num_examples": 31746}], "download_size": 2921991, "dataset_size": 3722834.818439022}}
|
2023-09-11T13:05:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "subtitles_test_set"
More Information needed
|
[
"# Dataset Card for \"subtitles_test_set\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"subtitles_test_set\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"subtitles_test_set\"\n\nMore Information needed"
] |
6d36f4bc7a1a32347f85cf0075f1cf16d521622e
|
# Dataset Card for "eurlex-57k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pietrolesci/eurlex-57k
|
[
"region:us"
] |
2023-09-11T13:09:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "embedding_all-MiniLM-L12-v2", "data_files": [{"split": "train", "path": "embedding_all-MiniLM-L12-v2/train-*"}, {"split": "validation", "path": "embedding_all-MiniLM-L12-v2/validation-*"}, {"split": "test", "path": "embedding_all-MiniLM-L12-v2/test-*"}]}, {"config_name": "embedding_all-mpnet-base-v2", "data_files": [{"split": "train", "path": "embedding_all-mpnet-base-v2/train-*"}, {"split": "validation", "path": "embedding_all-mpnet-base-v2/validation-*"}, {"split": "test", "path": "embedding_all-mpnet-base-v2/test-*"}]}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "data_files": [{"split": "train", "path": "embedding_multi-qa-mpnet-base-dot-v1/train-*"}, {"split": "validation", "path": "embedding_multi-qa-mpnet-base-dot-v1/validation-*"}, {"split": "test", "path": "embedding_multi-qa-mpnet-base-dot-v1/test-*"}]}, {"config_name": "eurovoc_concepts", "data_files": [{"split": "train", "path": "eurovoc_concepts/train-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "celex_id", "dtype": "string"}, {"name": "document_type", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "header", "dtype": "string"}, {"name": "recitals", "dtype": "string"}, {"name": "main_body", "sequence": "string"}, {"name": "eurovoc_concepts", "sequence": "string"}, {"name": "text", "dtype": "string"}, {"name": "uid", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 269684150, "num_examples": 45000}, {"name": "validation", "num_bytes": 35266624, "num_examples": 6000}, {"name": "test", "num_bytes": 35621361, "num_examples": 6000}], "download_size": 0, "dataset_size": 340572135}, {"config_name": "embedding_all-MiniLM-L12-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-MiniLM-L12-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 69660000, "num_examples": 45000}, {"name": "validation", "num_bytes": 9288000, "num_examples": 6000}, {"name": "test", "num_bytes": 9288000, "num_examples": 6000}], "download_size": 123441408, "dataset_size": 88236000}, {"config_name": "embedding_all-mpnet-base-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 138780000, "num_examples": 45000}, {"name": "validation", "num_bytes": 18504000, "num_examples": 6000}, {"name": "test", "num_bytes": 18504000, "num_examples": 6000}], "download_size": 211031101, "dataset_size": 175788000}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_multi-qa-mpnet-base-dot-v1", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 138780000, "num_examples": 45000}, {"name": "validation", "num_bytes": 18504000, "num_examples": 6000}, {"name": "test", "num_bytes": 18504000, "num_examples": 6000}], "download_size": 211029593, "dataset_size": 175788000}, {"config_name": "eurovoc_concepts", "features": [{"name": "concept_id", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 205049, "num_examples": 7201}], "download_size": 157326, "dataset_size": 205049}]}
|
2023-09-11T13:32:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eurlex-57k"
More Information needed
|
[
"# Dataset Card for \"eurlex-57k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eurlex-57k\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eurlex-57k\"\n\nMore Information needed"
] |
00b3e75d8f63eda7d944c9629694bc19c687218b
|
# Dataset Card for "DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Photolens/DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText
|
[
"region:us"
] |
2023-09-11T13:15:10+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23407332, "num_examples": 51332}], "download_size": 9565869, "dataset_size": 23407332}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-11T15:03:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText"
More Information needed
|
[
"# Dataset Card for \"DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText\"\n\nMore Information needed"
] |
[
6,
45
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText\"\n\nMore Information needed"
] |
20425dee03788fe1f9fad02c8c331f5e6d84da0f
|
# Dataset Card for Evaluation run of CobraMamba/mamba-gpt-3b-v4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CobraMamba/mamba-gpt-3b-v4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CobraMamba/mamba-gpt-3b-v4](https://huggingface.co/CobraMamba/mamba-gpt-3b-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CobraMamba__mamba-gpt-3b-v4",
"harness_winogrande_5",
split="train")
```
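Since the card lists 64 configurations, it can help to enumerate them before picking one. A minimal sketch using the standard `datasets` helper (nothing repo-specific is assumed beyond the repo id above):
```python
from datasets import get_dataset_config_names

# list every per-task configuration plus the aggregated "results" config
configs = get_dataset_config_names("open-llm-leaderboard/details_CobraMamba__mamba-gpt-3b-v4")
print(configs)
```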
## Latest results
These are the [latest results from run 2023-10-25T00:01:02.690756](https://huggingface.co/datasets/open-llm-leaderboard/details_CobraMamba__mamba-gpt-3b-v4/blob/main/results_2023-10-25T00-01-02.690756.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.011954697986577181,
"em_stderr": 0.0011130056898859247,
"f1": 0.0627841862416108,
"f1_stderr": 0.0016440985205687317,
"acc": 0.3325355902710252,
"acc_stderr": 0.007798820060438671
},
"harness|drop|3": {
"em": 0.011954697986577181,
"em_stderr": 0.0011130056898859247,
"f1": 0.0627841862416108,
"f1_stderr": 0.0016440985205687317
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.00226753710225448
},
"harness|winogrande|5": {
"acc": 0.6582478295185478,
"acc_stderr": 0.013330103018622863
}
}
```
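The aggregated numbers above live in the "results" configuration; assuming it follows the same split conventions as the per-task configs (timestamped splits plus "latest"), they can be reloaded directly:
```python
from datasets import load_dataset

# "latest" always points at the most recent evaluation run
results = load_dataset("open-llm-leaderboard/details_CobraMamba__mamba-gpt-3b-v4",
	"results",
	split="latest")
```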
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CobraMamba__mamba-gpt-3b-v4
|
[
"region:us"
] |
2023-09-11T13:17:41+00:00
|
{"pretty_name": "Evaluation run of CobraMamba/mamba-gpt-3b-v4", "dataset_summary": "Dataset automatically created during the evaluation run of model [CobraMamba/mamba-gpt-3b-v4](https://huggingface.co/CobraMamba/mamba-gpt-3b-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CobraMamba__mamba-gpt-3b-v4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T00:01:02.690756](https://huggingface.co/datasets/open-llm-leaderboard/details_CobraMamba__mamba-gpt-3b-v4/blob/main/results_2023-10-25T00-01-02.690756.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.011954697986577181,\n \"em_stderr\": 0.0011130056898859247,\n \"f1\": 0.0627841862416108,\n \"f1_stderr\": 0.0016440985205687317,\n \"acc\": 0.3325355902710252,\n \"acc_stderr\": 0.007798820060438671\n },\n \"harness|drop|3\": {\n \"em\": 0.011954697986577181,\n \"em_stderr\": 0.0011130056898859247,\n \"f1\": 0.0627841862416108,\n \"f1_stderr\": 0.0016440985205687317\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.00226753710225448\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6582478295185478,\n \"acc_stderr\": 0.013330103018622863\n }\n}\n```", "repo_url": "https://huggingface.co/CobraMamba/mamba-gpt-3b-v4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|arc:challenge|25_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T00_01_02.690756", "path": ["**/details_harness|drop|3_2023-10-25T00-01-02.690756.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T00-01-02.690756.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T00_01_02.690756", "path": ["**/details_harness|gsm8k|5_2023-10-25T00-01-02.690756.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T00-01-02.690756.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hellaswag|10_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": 
[{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-17-28.228620.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-17-28.228620.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T14-17-28.228620.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T14-17-28.228620.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T14-17-28.228620.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T00_01_02.690756", "path": ["**/details_harness|winogrande|5_2023-10-25T00-01-02.690756.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T00-01-02.690756.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T14_17_28.228620", "path": ["results_2023-09-11T14-17-28.228620.parquet"]}, {"split": "2023_10_25T00_01_02.690756", "path": ["results_2023-10-25T00-01-02.690756.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T00-01-02.690756.parquet"]}]}]}
|
2023-10-24T23:01:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CobraMamba/mamba-gpt-3b-v4
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CobraMamba/mamba-gpt-3b-v4 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-25T00:01:02.690756 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CobraMamba/mamba-gpt-3b-v4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CobraMamba/mamba-gpt-3b-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T00:01:02.690756(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CobraMamba/mamba-gpt-3b-v4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CobraMamba/mamba-gpt-3b-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T00:01:02.690756(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CobraMamba/mamba-gpt-3b-v4## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CobraMamba/mamba-gpt-3b-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T00:01:02.690756(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
360fd8fe1b090553caf0e6271b8e32fc093a5d83
|
# 🦣 MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
MathInstruct is a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields.
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH/](https://tiger-ai-lab.github.io/MAmmoTH/)
Paper: [https://arxiv.org/pdf/2309.05653.pdf](https://arxiv.org/pdf/2309.05653.pdf)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH)
Models:
| | **Base Model: Llama-2** | **Base Model: Code Llama** |
|-----|---------------------------------------------------------------|--------------------------------------------------------------------------|
| 7B | 🦣 [MAmmoTH-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 🦣 [MAmmoTH-Coder-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B) |
| 13B | 🦣 [MAmmoTH-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) | 🦣 [MAmmoTH-Coder-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B)|
| 34B | - | 🦣 [MAmmoTH-Coder-34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B)|
| 70B | 🦣 [MAmmoTH-70B](https://huggingface.co/TIGER-Lab/MAmmoTH-70B) | - |
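For a quick look at the data itself, a minimal loading sketch (only the standard `datasets` API is assumed, and the `train` split name is an assumption since this card does not state it; the printed schema is whatever the dataset actually exposes):
```python
from datasets import load_dataset

# load MathInstruct from the Hugging Face Hub (assuming a default "train" split)
ds = load_dataset("TIGER-Lab/MathInstruct", split="train")
print(ds.column_names)  # inspect the schema before use
print(ds[0])            # one CoT or PoT rationale example
```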
## **License**
Please check out the license of each subset in our curated dataset MathInstruct.
| Dataset Name | License Type |
|--------------|----------------|
| GSM8K | MIT |
| GSM8K-RFT | Not listed |
| AQuA-RAT | Apache 2.0 |
| MATH | MIT |
| TheoremQA | MIT |
| Camel-Math | Attribution-NonCommercial 4.0 International |
| NumGLUE | Apache-2.0 |
| MathQA | Apache-2.0 |
| Our Curated | MIT |
## **Citation**
Please cite our paper if you use our data, model or code. Please also kindly cite the original dataset papers.
```
@article{yue2023mammoth,
title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning},
author={Xiang Yue and Xingwei Qu and Ge Zhang and Yao Fu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
```
|
TIGER-Lab/MathInstruct
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"math",
"arxiv:2309.05653",
"region:us"
] |
2023-09-11T13:21:02+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "MathInstruct", "tags": ["math"]}
|
2023-11-20T02:58:18+00:00
|
[
"2309.05653"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #math #arxiv-2309.05653 #region-us
|
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
==========================================================================
MathInstruct is a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields.
Project Page: URL
Paper: URL
Code: URL
Models:
7B: Base Model Llama-2: MAmmoTH-7B, Base Model Code Llama: MAmmoTH-Coder-7B
13B: Base Model Llama-2: MAmmoTH-13B, Base Model Code Llama: MAmmoTH-Coder-13B
34B: Base Model Llama-2: -, Base Model Code Llama: MAmmoTH-Coder-34B
70B: Base Model Llama-2: MAmmoTH-70B, Base Model Code Llama: -
License
-------
Please check out the license of each subset in our curated dataset MathInstruct.
Citation
--------
Please cite our paper if you use our data, model or code. Please also kindly cite the original dataset papers.
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #math #arxiv-2309.05653 #region-us \n"
] |
[
48
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #math #arxiv-2309.05653 #region-us \n"
] |
65b0426e0d2dd29decd4fef0c8b7eb0230c0ab7f
|
# Dataset of ichinose_shiki/一ノ瀬志希 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of ichinose_shiki/一ノ瀬志希 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are `long_hair, blue_eyes, brown_hair, breasts, bangs, wavy_hair, ahoge, medium_breasts, earrings, hair_between_eyes, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 909.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ichinose_shiki_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 456.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ichinose_shiki_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1259 | 1003.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ichinose_shiki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 773.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ichinose_shiki_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1259 | 1.53 GiB | [Download](https://huggingface.co/datasets/CyberHarem/ichinose_shiki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ichinose_shiki_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
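The other packages in the table are plain IMG+TXT archives, so the same download pattern works without waifuc; a minimal sketch for the 800px variant (the target directory name is arbitrary):
```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/ichinose_shiki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract the image/text pairs to a local directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```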
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, blush, cleavage, simple_background, smile, solo, white_background, :3, collarbone, looking_at_viewer, off_shoulder, open_clothes, shirt, bare_shoulders, closed_mouth, tank_top, long_sleeves, blue_shorts, camisole, large_breasts, strap_slip, upper_body |
| 1 | 16 |  |  |  |  |  | 1girl, long_sleeves, smile, solo, white_shirt, looking_at_viewer, open_clothes, red_bowtie, simple_background, :3, blush, school_uniform, jewelry, plaid_bowtie, white_background, collared_shirt, cardigan, dress_shirt, jacket, pleated_skirt, upper_body, navel, off_shoulder |
| 2 | 5 |  |  |  |  |  | 1girl, bowtie, school_uniform, smile, solo, looking_at_viewer, open_clothes, pleated_skirt, simple_background, white_background, :3, cardigan, sleeves_past_wrists, white_shirt, cleavage, jewelry, red_bow |
| 3 | 20 |  |  |  |  |  | 1girl, looking_at_viewer, solo, smile, two_side_up, cleavage, navel, blush, hair_bow, bare_shoulders, skirt, wrist_cuffs, midriff, :3, heart, pearl_necklace, thighhighs, garter_straps |
| 4 | 5 |  |  |  |  |  | 1girl, blush, cleavage, large_breasts, looking_at_viewer, solo, two_side_up, :3, smile, black_panties, jewelry, open_mouth, black_thighhighs, garter_straps |
| 5 | 8 |  |  |  |  |  | 1girl, elbow_gloves, looking_at_viewer, solo, white_dress, white_gloves, cleavage, hair_bow, sleeveless_dress, smile, bare_shoulders, choker, jewelry, petals, blush, collarbone, :3, closed_mouth, high_heels, holding_flower, pink_dress, red_rose, ribbon, upper_body, white_footwear |
| 6 | 18 |  |  |  |  |  | 1girl, black_gloves, elbow_gloves, hair_bow, red_bow, solo, looking_at_viewer, red_dress, sleeveless_dress, smile, striped_bow, two_side_up, bare_shoulders, blush, brooch, simple_background, neck_ribbon, red_ribbon |
| 7 | 5 |  |  |  |  |  | 1girl, blue_sky, blush, cleavage, cloud, day, outdoors, solo, :3, beach, collarbone, large_breasts, looking_at_viewer, ocean, red_bikini, smile, bare_shoulders, closed_mouth, halterneck, string_bikini, thighs, barefoot, lying, navel, side-tie_bikini_bottom, water |
| 8 | 7 |  |  |  |  |  | 1girl, cloud, day, looking_at_viewer, outdoors, smile, solo, striped_bikini, blush, cleavage, hair_flower, navel, necklace, hibiscus, ocean, beach, blue_sky, bracelet, :3, collarbone, one_eye_closed, side-tie_bikini_bottom, sitting, water |
| 9 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, smile, solo, obi, floral_print, :3, hair_flower, open_mouth, petals, red_kimono |
| 10 | 9 |  |  |  |  |  | 1girl, hair_bow, long_sleeves, looking_at_viewer, ponytail, solo, white_gloves, blue_bow, belt, smile, cleavage, frills, striped, blue_coat, blush, brooch, buckle, closed_mouth, cowboy_shot, parted_lips, short_shorts, simple_background, white_background, white_bow |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | cleavage | simple_background | smile | solo | white_background | :3 | collarbone | looking_at_viewer | off_shoulder | open_clothes | shirt | bare_shoulders | closed_mouth | tank_top | long_sleeves | blue_shorts | camisole | large_breasts | strap_slip | upper_body | white_shirt | red_bowtie | school_uniform | jewelry | plaid_bowtie | collared_shirt | cardigan | dress_shirt | jacket | pleated_skirt | navel | bowtie | sleeves_past_wrists | red_bow | two_side_up | hair_bow | skirt | wrist_cuffs | midriff | heart | pearl_necklace | thighhighs | garter_straps | black_panties | open_mouth | black_thighhighs | elbow_gloves | white_dress | white_gloves | sleeveless_dress | choker | petals | high_heels | holding_flower | pink_dress | red_rose | ribbon | white_footwear | black_gloves | red_dress | striped_bow | brooch | neck_ribbon | red_ribbon | blue_sky | cloud | day | outdoors | beach | ocean | red_bikini | halterneck | string_bikini | thighs | barefoot | lying | side-tie_bikini_bottom | water | striped_bikini | hair_flower | necklace | hibiscus | bracelet | one_eye_closed | sitting | obi | floral_print | red_kimono | ponytail | blue_bow | belt | frills | striped | blue_coat | buckle | cowboy_shot | parted_lips | short_shorts | white_bow |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:-----------|:--------------------|:--------|:-------|:-------------------|:-----|:-------------|:--------------------|:---------------|:---------------|:--------|:-----------------|:---------------|:-----------|:---------------|:--------------|:-----------|:----------------|:-------------|:-------------|:--------------|:-------------|:-----------------|:----------|:---------------|:-----------------|:-----------|:--------------|:---------|:----------------|:--------|:---------|:----------------------|:----------|:--------------|:-----------|:--------|:--------------|:----------|:--------|:-----------------|:-------------|:----------------|:----------------|:-------------|:-------------------|:---------------|:--------------|:---------------|:-------------------|:---------|:---------|:-------------|:-----------------|:-------------|:-----------|:---------|:-----------------|:---------------|:------------|:--------------|:---------|:--------------|:-------------|:-----------|:--------|:------|:-----------|:--------|:--------|:-------------|:-------------|:----------------|:---------|:-----------|:--------|:-------------------------|:--------|:-----------------|:--------------|:-----------|:-----------|:-----------|:-----------------|:----------|:------|:---------------|:-------------|:-----------|:-----------|:-------|:---------|:----------|:------------|:---------|:--------------|:--------------|:---------------|:------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 16 |  |  |  |  |  | X | X | | X | X | X | X | X | | X | X | X | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | X | X | X | X | X | | X | | X | | | | | | | | | | | X | | X | X | | | X | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 20 |  |  |  |  |  | X | X | X | | X | X | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | | X | X | | X | | X | | | | | | | | | | X | | | | | | X | | | | | | | | | | | X | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | X | X | | X | X | | X | X | X | | | | X | X | | | | | | | X | | | | X | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 18 |  |  |  |  |  | X | X | | X | X | X | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | X | | | X | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | X | X | | X | X | | X | X | X | | | | X | X | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 8 | 7 |  |  |  |  |  | X | X | X | | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 9 | 6 |  |  |  |  |  | X | X | | | X | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | | | | | | | | | | | |
| 10 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/ichinose_shiki_idolmastercinderellagirls
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T13:25:03+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T11:22:03+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of ichinose\_shiki/一ノ瀬志希 (THE iDOLM@STER: Cinderella Girls)
===================================================================
This is the dataset of ichinose\_shiki/一ノ瀬志希 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are 'long\_hair, blue\_eyes, brown\_hair, breasts, bangs, wavy\_hair, ahoge, medium\_breasts, earrings, hair\_between\_eyes, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
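```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/ichinose_shiki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```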
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d6a5315fe7c21fffdbc71dba5637d3a641fb1284
|
# Dataset Card for the Noisy LibriSpeech dataset
## Dataset Description
- **Homepage:** Coming Soon
- **Repository:** https://huggingface.co/datasets/zhaoyang9425/NoisyLibriSpeechDataset-MUSAN
- **Paper:** Coming Soon
- **Point of Contact:** [email protected]
### Dataset Summary
The noisy speech corpus is constructed by randomly sampling noise clips from the MUSAN noise dataset and adding them to the LibriSpeech dataset.
The Signal-to-Noise Ratio (SNR) levels are sampled uniformly from 0 dB, 5 dB, 10 dB, 15 dB, and 20 dB.
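As a rough illustration of the mixing step (a minimal sketch, not necessarily the authors' actual pipeline; the array names and the tiling of short noise clips are assumptions), the noise gain for a target SNR follows from SNR_dB = 10·log10(P_speech / P_noise):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add `noise` to `speech`, scaled so the mixture has the target SNR."""
    # Tile or truncate the noise clip to the utterance length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # guard against silent noise clips
    # SNR_dB = 10 * log10(speech_power / (gain**2 * noise_power))  =>  solve for gain.
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

# One of the five SNR levels used in this corpus, sampled uniformly.
snr_db = np.random.choice([0, 5, 10, 15, 20])
```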
## Dataset Structure
Same structure as the LibriSpeech dataset.
|
zhaoyang9425/NoisyLibriSpeechDataset-MUSAN
|
[
"language:en",
"license:afl-3.0",
"read book",
"region:us"
] |
2023-09-11T13:31:43+00:00
|
{"language": ["en"], "license": "afl-3.0", "task_categories": ["noisy_speech_recognition"], "pretty_name": "NoisyLibriSpeech_MUSAN", "tags": ["read book"]}
|
2023-09-14T11:29:19+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-afl-3.0 #read book #region-us
|
# Dataset Card for the Noisy LibriSpeech dataset
## Dataset Description
- Homepage: Coming Soon
- Repository: URL
- Paper: Coming Soon
- Point of Contact: zhaoyang9425@URL
### Dataset Summary
The noisy speech corpus is constructed by randomly sampling noise clips from the MUSAN noise dataset and adding them to the LibriSpeech dataset.
The Signal-to-Noise Ratio (SNR) levels are sampled uniformly from 0 dB, 5 dB, 10 dB, 15 dB, and 20 dB.
## Dataset Structure
Same structure as the LibriSpeech dataset.
|
[
"# Dataset Card for the Noisy LibriSpeech dataset",
"## Dataset Description\n\n- Homepage: Coming Soon\n- Repository: URL\n- Paper: Coming Soon\n=- Point of Contact: zhaoyang9425@URL",
"### Dataset Summary\n\nThe noisy speech corpus is constructed by randomly sampling noise clips from the MUSAN noise dataset and adding them to LibriSpeech dataset.\nThe Signal-to-Noise Ratio (SNR) levels are sampled from a uniform distribution in 0 dB, 5 dB, 10 dB, 15 dB, and 20 dB.",
"## Dataset Structure\n\nsame structure with LibriSpeech dataset"
] |
[
"TAGS\n#language-English #license-afl-3.0 #read book #region-us \n",
"# Dataset Card for the Noisy LibriSpeech dataset",
"## Dataset Description\n\n- Homepage: Coming Soon\n- Repository: URL\n- Paper: Coming Soon\n=- Point of Contact: zhaoyang9425@URL",
"### Dataset Summary\n\nThe noisy speech corpus is constructed by randomly sampling noise clips from the MUSAN noise dataset and adding them to LibriSpeech dataset.\nThe Signal-to-Noise Ratio (SNR) levels are sampled from a uniform distribution in 0 dB, 5 dB, 10 dB, 15 dB, and 20 dB.",
"## Dataset Structure\n\nsame structure with LibriSpeech dataset"
] |
[
21,
15,
38,
86,
15
] |
[
"passage: TAGS\n#language-English #license-afl-3.0 #read book #region-us \n# Dataset Card for the Noisy LibriSpeech dataset## Dataset Description\n\n- Homepage: Coming Soon\n- Repository: URL\n- Paper: Coming Soon\n=- Point of Contact: zhaoyang9425@URL### Dataset Summary\n\nThe noisy speech corpus is constructed by randomly sampling noise clips from the MUSAN noise dataset and adding them to LibriSpeech dataset.\nThe Signal-to-Noise Ratio (SNR) levels are sampled from a uniform distribution in 0 dB, 5 dB, 10 dB, 15 dB, and 20 dB.## Dataset Structure\n\nsame structure with LibriSpeech dataset"
] |
ec1e5034ed91cf9d29b09dc77ac34a1066c828ff
|
## EuroSAT-SAR: Land Use and Land Cover Classification with Sentinel-1
The EuroSAT-SAR dataset is a SAR version of the popular [EuroSAT](https://github.com/phelber/EuroSAT) dataset. We matched each Sentinel-2 image in EuroSAT with one Sentinel-1 patch according to their geospatial coordinates, ending up with 27,000 dual-pol Sentinel-1 SAR images divided into 10 classes. The EuroSAT-SAR dataset was collected as one downstream task in the work [FG-MAE](https://github.com/zhu-xlab/FGMAE) to serve as a CIFAR-like, clean, balanced, ML-ready dataset for remote sensing SAR image recognition.
<p align="center">
<img width="1000" alt="fgmae main structure" src="assets/eurosat-sar.png">
</p>
The dataset can be downloaded as a compressed zip file [here](https://huggingface.co/datasets/wangyi111/EuroSAT-SAR/resolve/main/EuroSAT-SAR.zip).
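The archive can also be fetched programmatically (a minimal sketch using `hf_hub_download`, with the repo id and filename taken from the link above; the local extraction directory name is an assumption):

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download the compressed archive from the dataset repo
zip_file = hf_hub_download(
    repo_id='wangyi111/EuroSAT-SAR',
    repo_type='dataset',
    filename='EuroSAT-SAR.zip',
)
# extract to a local directory (name assumed)
dataset_dir = 'EuroSAT-SAR'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```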
### Citation
```bibtex
@article{wang2023feature,
title={Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing},
author={Wang, Yi and Hern{\'a}ndez, Hugo Hern{\'a}ndez and Albrecht, Conrad M and Zhu, Xiao Xiang},
journal={arXiv preprint arXiv:2310.18653},
year={2023}
}
```
|
wangyi111/EuroSAT-SAR
|
[
"task_categories:image-classification",
"license:mit",
"region:us"
] |
2023-09-11T13:46:42+00:00
|
{"license": "mit", "task_categories": ["image-classification"]}
|
2023-11-20T12:39:01+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #license-mit #region-us
|
## EuroSAT-SAR: Land Use and Land Cover Classification with Sentinel-1
The EuroSAT-SAR dataset is a SAR version of the popular EuroSAT dataset. We matched each Sentinel-2 image in EuroSAT with one Sentinel-1 patch according to their geospatial coordinates, ending up with 27,000 dual-pol Sentinel-1 SAR images divided into 10 classes. The EuroSAT-SAR dataset was collected as one downstream task in the work FG-MAE to serve as a CIFAR-like, clean, balanced, ML-ready dataset for remote sensing SAR image recognition.
<p align="center">
<img width="1000" alt="fgmae main structure" src="assets/URL">
</p>
The dataset can be downloaded as a compressed zip file here.
|
[
"## EuroSAT-SAR: Land Use and Land Cover Classification with Sentinel-1\n\nThe EuroSAT-SAR dataset is a SAR version of the popular EuroSAT dataset. We matched each Sentinel-2 image in EuroSAT with one Sentinel-1 patch according to the geospatial coordinates, ending up with 27,000 dual-pol Sentinel-1 SAR images divided in 10 classes. The EuroSAT-SAR dataset was collected as one downstream task in the work FG-MAE to serve as a CIFAR-like, clean, balanced ML-ready dataset for remote sensing SAR image recognition.\n\n<p align=\"center\">\n <img width=\"1000\" alt=\"fgmae main structure\" src=\"assets/URL\">\n</p>\n\nThe dataset can be downloaded as a compressed zip file here."
] |
[
"TAGS\n#task_categories-image-classification #license-mit #region-us \n",
"## EuroSAT-SAR: Land Use and Land Cover Classification with Sentinel-1\n\nThe EuroSAT-SAR dataset is a SAR version of the popular EuroSAT dataset. We matched each Sentinel-2 image in EuroSAT with one Sentinel-1 patch according to the geospatial coordinates, ending up with 27,000 dual-pol Sentinel-1 SAR images divided in 10 classes. The EuroSAT-SAR dataset was collected as one downstream task in the work FG-MAE to serve as a CIFAR-like, clean, balanced ML-ready dataset for remote sensing SAR image recognition.\n\n<p align=\"center\">\n <img width=\"1000\" alt=\"fgmae main structure\" src=\"assets/URL\">\n</p>\n\nThe dataset can be downloaded as a compressed zip file here."
] |
[
22,
185
] |
[
"passage: TAGS\n#task_categories-image-classification #license-mit #region-us \n## EuroSAT-SAR: Land Use and Land Cover Classification with Sentinel-1\n\nThe EuroSAT-SAR dataset is a SAR version of the popular EuroSAT dataset. We matched each Sentinel-2 image in EuroSAT with one Sentinel-1 patch according to the geospatial coordinates, ending up with 27,000 dual-pol Sentinel-1 SAR images divided in 10 classes. The EuroSAT-SAR dataset was collected as one downstream task in the work FG-MAE to serve as a CIFAR-like, clean, balanced ML-ready dataset for remote sensing SAR image recognition.\n\n<p align=\"center\">\n <img width=\"1000\" alt=\"fgmae main structure\" src=\"assets/URL\">\n</p>\n\nThe dataset can be downloaded as a compressed zip file here."
] |
d80ca90ead08ce7327646c8b3d4168ab19d7352b
|
# Dataset Card for "librarian-bot-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
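Until the card is filled in, a minimal loading sketch (field names come from the repo metadata, which declares a single `train` split; this assumes the repo loads directly with `datasets`):

```python
from datasets import load_dataset

ds = load_dataset("librarian-bot/librarian-bot-stats", split="train")
print(ds.features)  # createdAt, pr_number, status, repo_id, type, isPullRequest
print(ds[0])
```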
|
librarian-bot/librarian-bot-stats
|
[
"region:us"
] |
2023-09-11T13:56:07+00:00
|
{"dataset_info": {"features": [{"name": "createdAt", "dtype": "timestamp[us]"}, {"name": "pr_number", "dtype": "int64"}, {"name": "status", "dtype": "large_string"}, {"name": "repo_id", "dtype": "large_string"}, {"name": "type", "dtype": "large_string"}, {"name": "isPullRequest", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1489460, "num_examples": 16905}], "download_size": 537364, "dataset_size": 1489460}}
|
2024-02-05T09:30:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "librarian-bot-stats"
More Information needed
|
[
"# Dataset Card for \"librarian-bot-stats\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"librarian-bot-stats\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"librarian-bot-stats\"\n\nMore Information needed"
] |
e9583ff11d03d57172de124e0fa06ce57964e6eb
|
### Japanese Expressions Dataset from Human Rights Infringement on the Internet
An adaptation of [権利侵害と不快さの間:日本語人権侵害表現データセット](https://zenodo.org/record/7960519) ("Between Rights Infringement and Discomfort: a Japanese dataset of human-rights-infringing expressions") for HuggingFace datasets.
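Since the data has been adapted for HuggingFace datasets, it should load in the usual way (a minimal sketch; the split name is an assumption, and this assumes the repo exposes standard data files):

```python
from datasets import load_dataset

# repo id from this card; split name assumed
ds = load_dataset("p1atdev/JEDHRI", split="train")
print(ds[0])
```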
|
p1atdev/JEDHRI
|
[
"size_categories:n<1K",
"language:ja",
"license:cc-by-4.0",
"legal",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T14:01:39+00:00
|
{"language": ["ja"], "license": "cc-by-4.0", "size_categories": ["n<1K"], "tags": ["legal", "not-for-all-audiences"]}
|
2023-09-11T14:05:47+00:00
|
[] |
[
"ja"
] |
TAGS
#size_categories-n<1K #language-Japanese #license-cc-by-4.0 #legal #not-for-all-audiences #region-us
|
### Japanese Expressions Dataset from Human Rights Infringement on the Internet
An adaptation of 権利侵害と不快さの間:日本語人権侵害表現データセット ("Between Rights Infringement and Discomfort: a Japanese dataset of human-rights-infringing expressions") for HuggingFace datasets.
|
[
"### Japanese Expressions Dataset from Human Rights Infringement on Internet\n\n権利侵害と不快さの間:日本語人権侵害表現データセット を HuggingFace datasets 向けに改変。"
] |
[
"TAGS\n#size_categories-n<1K #language-Japanese #license-cc-by-4.0 #legal #not-for-all-audiences #region-us \n",
"### Japanese Expressions Dataset from Human Rights Infringement on Internet\n\n権利侵害と不快さの間:日本語人権侵害表現データセット を HuggingFace datasets 向けに改変。"
] |
[
42,
47
] |
[
"passage: TAGS\n#size_categories-n<1K #language-Japanese #license-cc-by-4.0 #legal #not-for-all-audiences #region-us \n### Japanese Expressions Dataset from Human Rights Infringement on Internet\n\n権利侵害と不快さの間:日本語人権侵害表現データセット を HuggingFace datasets 向けに改変。"
] |
3bfd92131a0d8b795f23f3bc5a9de3dd95df143f
|
# Dataset Card for "AR-dotless-medium"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
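Until the card is filled in, a minimal loading sketch (the repo metadata declares a default config with a `train` split and paired string columns `clean` and `dotless`):

```python
from datasets import load_dataset

# column names from the repo metadata: "clean" and "dotless" string fields
ds = load_dataset("dot-ammar/AR-dotless-medium", split="train")
print(ds[0]["clean"])
print(ds[0]["dotless"])
```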
|
dot-ammar/AR-dotless-medium
|
[
"region:us"
] |
2023-09-11T14:11:00+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "clean", "dtype": "string"}, {"name": "dotless", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 400580815.31144565, "num_examples": 2274050}], "download_size": 228315577, "dataset_size": 400580815.31144565}}
|
2023-09-11T14:37:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AR-dotless-medium"
More Information needed
|
[
"# Dataset Card for \"AR-dotless-medium\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AR-dotless-medium\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AR-dotless-medium\"\n\nMore Information needed"
] |
ed40292681d2185c53352507baec0576fbee199a
|
# Dataset Card for Evaluation run of KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2](https://huggingface.co/KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__megatron-gpt2-345m-evol_instruct_v2_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-06T21:23:09.566813](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__megatron-gpt2-345m-evol_instruct_v2_public/blob/main/results_2023-11-06T21-23-09.566813.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652191626,
"f1": 0.045425755033557384,
"f1_stderr": 0.0011926647330667845,
"acc": 0.2616416732438832,
"acc_stderr": 0.007018620654786821
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652191626,
"f1": 0.045425755033557384,
"f1_stderr": 0.0011926647330667845
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5232833464877664,
"acc_stderr": 0.014037241309573642
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_KnutJaegersberg__megatron-gpt2-345m-evol_instruct_v2
|
[
"region:us"
] |
2023-09-11T14:25:39+00:00
|
{"pretty_name": "Evaluation run of KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2](https://huggingface.co/KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KnutJaegersberg__megatron-gpt2-345m-evol_instruct_v2_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-06T21:23:09.566813](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__megatron-gpt2-345m-evol_instruct_v2_public/blob/main/results_2023-11-06T21-23-09.566813.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.00033145814652191626,\n \"f1\": 0.045425755033557384,\n \"f1_stderr\": 0.0011926647330667845,\n \"acc\": 0.2616416732438832,\n \"acc_stderr\": 0.007018620654786821\n },\n \"harness|drop|3\": {\n \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.00033145814652191626,\n \"f1\": 0.045425755033557384,\n \"f1_stderr\": 0.0011926647330667845\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5232833464877664,\n \"acc_stderr\": 0.014037241309573642\n }\n}\n```", "repo_url": "https://huggingface.co/KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_04T22_13_52.567889", "path": ["**/details_harness|drop|3_2023-11-04T22-13-52.567889.parquet"]}, {"split": "2023_11_06T21_23_09.566813", "path": ["**/details_harness|drop|3_2023-11-06T21-23-09.566813.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-06T21-23-09.566813.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_04T22_13_52.567889", "path": ["**/details_harness|gsm8k|5_2023-11-04T22-13-52.567889.parquet"]}, {"split": "2023_11_06T21_23_09.566813", "path": ["**/details_harness|gsm8k|5_2023-11-06T21-23-09.566813.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-06T21-23-09.566813.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_04T22_13_52.567889", "path": ["**/details_harness|winogrande|5_2023-11-04T22-13-52.567889.parquet"]}, {"split": "2023_11_06T21_23_09.566813", "path": 
["**/details_harness|winogrande|5_2023-11-06T21-23-09.566813.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-06T21-23-09.566813.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_04T22_13_52.567889", "path": ["results_2023-11-04T22-13-52.567889.parquet"]}, {"split": "2023_11_06T21_23_09.566813", "path": ["results_2023-11-06T21-23-09.566813.parquet"]}, {"split": "latest", "path": ["results_2023-11-06T21-23-09.566813.parquet"]}]}]}
|
2023-12-01T14:06:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
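```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__megatron-gpt2-345m-evol_instruct_v2_public",
    "harness_winogrande_5",
    split="train")
```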
## Latest results
These are the latest results from run 2023-11-06T21:23:09.566813 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T21:23:09.566813(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T21:23:09.566813(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
181,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/megatron-gpt2-345m-evol_instruct_v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-06T21:23:09.566813(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5c77e0135e4a25673183acf7a2af7c01fba2283a
|
# Dataset of helena_blavatsky/エレナ・ブラヴァツキー/海伦娜·布拉瓦茨基 (Fate/Grand Order)
This is the dataset of helena_blavatsky/エレナ・ブラヴァツキー/海伦娜·布拉瓦茨基 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `purple_hair, purple_eyes, short_hair, bangs, breasts, small_breasts, hat`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 676.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/helena_blavatsky_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 378.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/helena_blavatsky_fgo/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1287 | 853.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/helena_blavatsky_fgo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 598.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/helena_blavatsky_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1287 | 1.18 GiB | [Download](https://huggingface.co/datasets/CyberHarem/helena_blavatsky_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/helena_blavatsky_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, black_bikini, looking_at_viewer, solo, ponytail, smile, bare_shoulders, blush, navel, simple_background, black_gloves, black_thighhighs, collarbone, hair_bow, white_background, garrison_cap, headphones, black_bow, closed_mouth |
| 1 | 22 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, solo, black_thighhighs, detached_sleeves, smile, belt, white_sleeves, blush, book, flat_chest, open_mouth, detached_collar, strapless_dress |
| 2 | 10 |  |  |  |  |  | 1girl, bare_shoulders, detached_collar, off_shoulder, open_coat, solo, beret, black_dress, blush, long_sleeves, looking_at_viewer, black_coat, black_headwear, belt, black_thighhighs, book, short_dress, strapless_dress, thighs, smile, open_mouth, simple_background, black_footwear, white_background |
| 3 | 16 |  |  |  |  |  | blue_coat, blush, fur-trimmed_coat, looking_at_viewer, 1girl, blue_dress, blue_gloves, blue_headwear, fur-trimmed_dress, long_sleeves, smile, solo, open_coat, red_bow, large_bow, beanie, badge, open_mouth, black_pantyhose, blue_footwear, hooded_coat, ankh, boots, brown_pantyhose, sack, belt |
| 4 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, open_mouth, bar_censor, nipples, nude, penis, solo_focus, thighhighs, ass, looking_at_viewer, sex, sweat, anus, maid_headdress, pov, ribbon, smile, vaginal, white_apron |
| 5 | 12 |  |  |  |  |  | 1girl, beret, long_sleeves, looking_at_viewer, solo, black_gloves, blush, smile, pink_skirt, hat_feather, white_shirt, hat_flower, red_vest, ascot, collared_shirt, open_mouth, striped_thighhighs, pink_thighhighs, red_flower, ankh, black_bow, one_eye_closed, puffy_sleeves, red_skirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_bikini | looking_at_viewer | solo | ponytail | smile | bare_shoulders | blush | navel | simple_background | black_gloves | black_thighhighs | collarbone | hair_bow | white_background | garrison_cap | headphones | black_bow | closed_mouth | detached_sleeves | belt | white_sleeves | book | flat_chest | open_mouth | detached_collar | strapless_dress | off_shoulder | open_coat | beret | black_dress | long_sleeves | black_coat | black_headwear | short_dress | thighs | black_footwear | blue_coat | fur-trimmed_coat | blue_dress | blue_gloves | blue_headwear | fur-trimmed_dress | red_bow | large_bow | beanie | badge | black_pantyhose | blue_footwear | hooded_coat | ankh | boots | brown_pantyhose | sack | 1boy | hetero | bar_censor | nipples | nude | penis | solo_focus | thighhighs | ass | sex | sweat | anus | maid_headdress | pov | ribbon | vaginal | white_apron | pink_skirt | hat_feather | white_shirt | hat_flower | red_vest | ascot | collared_shirt | striped_thighhighs | pink_thighhighs | red_flower | one_eye_closed | puffy_sleeves | red_skirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------------|:-------|:-----------|:--------|:-----------------|:--------|:--------|:--------------------|:---------------|:-------------------|:-------------|:-----------|:-------------------|:---------------|:-------------|:------------|:---------------|:-------------------|:-------|:----------------|:-------|:-------------|:-------------|:------------------|:------------------|:---------------|:------------|:--------|:--------------|:---------------|:-------------|:-----------------|:--------------|:---------|:-----------------|:------------|:-------------------|:-------------|:--------------|:----------------|:--------------------|:----------|:------------|:---------|:--------|:------------------|:----------------|:--------------|:-------|:--------|:------------------|:-------|:-------|:---------|:-------------|:----------|:-------|:--------|:-------------|:-------------|:------|:------|:--------|:-------|:-----------------|:------|:---------|:----------|:--------------|:-------------|:--------------|:--------------|:-------------|:-----------|:--------|:-----------------|:---------------------|:------------------|:-------------|:-----------------|:----------------|:------------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 22 |  |  |  |  |  | X | | X | X | | X | X | X | | | | X | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | X | X | | X | X | X | | X | | X | | | X | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 16 |  |  |  |  |  | X | | X | X | | X | | X | | | | | | | | | | | | | X | | | | X | | | | X | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | X | | | X | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 5 | 12 |  |  |  |  |  | X | | X | X | | X | | X | | | X | | | | | | | X | | | | | | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/helena_blavatsky_fgo
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T14:28:31+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-12T08:14:47+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of helena\_blavatsky/エレナ・ブラヴァツキー/海伦娜·布拉瓦茨基 (Fate/Grand Order)
=====================================================================
This is the dataset of helena\_blavatsky/エレナ・ブラヴァツキー/海伦娜·布拉瓦茨基 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are 'purple\_hair, purple\_eyes, short\_hair, bangs, breasts, small\_breasts, hat', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
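```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/helena_blavatsky_fgo',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```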
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d4a4d6585ee5d277f422cb51b495ec3869df34a8
|
# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-7b-pretrain
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/YeungNLP/firefly-llama2-7b-pretrain
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [YeungNLP/firefly-llama2-7b-pretrain](https://huggingface.co/YeungNLP/firefly-llama2-7b-pretrain) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_YeungNLP__firefly-llama2-7b-pretrain",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T22:23:31.822890](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-llama2-7b-pretrain/blob/main/results_2023-10-24T22-23-31.822890.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0007340604026845638,
"em_stderr": 0.0002773614457335575,
"f1": 0.0473752097315439,
"f1_stderr": 0.0011829405023092946,
"acc": 0.36752358971812016,
"acc_stderr": 0.008870377138277116
},
"harness|drop|3": {
"em": 0.0007340604026845638,
"em_stderr": 0.0002773614457335575,
"f1": 0.0473752097315439,
"f1_stderr": 0.0011829405023092946
},
"harness|gsm8k|5": {
"acc": 0.032600454890068235,
"acc_stderr": 0.004891669021939581
},
"harness|winogrande|5": {
"acc": 0.7024467245461721,
"acc_stderr": 0.012849085254614652
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_YeungNLP__firefly-llama2-7b-pretrain
|
[
"region:us"
] |
2023-09-11T14:29:53+00:00
|
{"pretty_name": "Evaluation run of YeungNLP/firefly-llama2-7b-pretrain", "dataset_summary": "Dataset automatically created during the evaluation run of model [YeungNLP/firefly-llama2-7b-pretrain](https://huggingface.co/YeungNLP/firefly-llama2-7b-pretrain) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_YeungNLP__firefly-llama2-7b-pretrain\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T22:23:31.822890](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-llama2-7b-pretrain/blob/main/results_2023-10-24T22-23-31.822890.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0007340604026845638,\n \"em_stderr\": 0.0002773614457335575,\n \"f1\": 0.0473752097315439,\n \"f1_stderr\": 0.0011829405023092946,\n \"acc\": 0.36752358971812016,\n \"acc_stderr\": 0.008870377138277116\n },\n \"harness|drop|3\": {\n \"em\": 0.0007340604026845638,\n \"em_stderr\": 0.0002773614457335575,\n \"f1\": 0.0473752097315439,\n \"f1_stderr\": 0.0011829405023092946\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.032600454890068235,\n \"acc_stderr\": 0.004891669021939581\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7024467245461721,\n \"acc_stderr\": 0.012849085254614652\n }\n}\n```", "repo_url": "https://huggingface.co/YeungNLP/firefly-llama2-7b-pretrain", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|arc:challenge|25_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T22_23_31.822890", "path": ["**/details_harness|drop|3_2023-10-24T22-23-31.822890.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T22-23-31.822890.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T22_23_31.822890", "path": ["**/details_harness|gsm8k|5_2023-10-24T22-23-31.822890.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T22-23-31.822890.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hellaswag|10_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T15-29-37.507273.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-29-37.507273.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-29-37.507273.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T15-29-37.507273.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T15-29-37.507273.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T15-29-37.507273.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T22_23_31.822890", "path": ["**/details_harness|winogrande|5_2023-10-24T22-23-31.822890.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T22-23-31.822890.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T15_29_37.507273", "path": ["results_2023-09-11T15-29-37.507273.parquet"]}, {"split": "2023_10_24T22_23_31.822890", "path": ["results_2023-10-24T22-23-31.822890.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T22-23-31.822890.parquet"]}]}]}
|
2023-10-24T21:23:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-7b-pretrain
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-7b-pretrain on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
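For instance, for the `harness_winogrande_5` config (reproduced from the card metadata above):

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_YeungNLP__firefly-llama2-7b-pretrain",
    "harness_winogrande_5",
    split="train")
```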
## Latest results
These are the latest results from run 2023-10-24T22:23:31.822890 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
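The aggregated figures from that run, as recorded in the card metadata above:

```python
{
    "all": {
        "em": 0.0007340604026845638,
        "em_stderr": 0.0002773614457335575,
        "f1": 0.0473752097315439,
        "f1_stderr": 0.0011829405023092946,
        "acc": 0.36752358971812016,
        "acc_stderr": 0.008870377138277116
    },
    "harness|drop|3": {
        "em": 0.0007340604026845638,
        "em_stderr": 0.0002773614457335575,
        "f1": 0.0473752097315439,
        "f1_stderr": 0.0011829405023092946
    },
    "harness|gsm8k|5": {
        "acc": 0.032600454890068235,
        "acc_stderr": 0.004891669021939581
    },
    "harness|winogrande|5": {
        "acc": 0.7024467245461721,
        "acc_stderr": 0.012849085254614652
    }
}
```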
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-7b-pretrain",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-7b-pretrain on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T22:23:31.822890(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-7b-pretrain",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-7b-pretrain on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T22:23:31.822890(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-7b-pretrain## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-7b-pretrain on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T22:23:31.822890(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d73c74061d1cabbb93369484cd81b4a2a5484e4c
|
# Dataset of mao/マオ (Pokémon)
This is the dataset of mao/マオ (Pokémon), containing 500 images and their tags.
The core tags of this character are `green_hair, long_hair, dark_skin, dark-skinned_female, twintails, green_eyes, hair_ornament, hair_flower, breasts, bangs, swept_bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 463.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mao_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 294.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mao_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1109 | 591.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mao_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 417.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mao_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1109 | 787.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mao_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mao_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, flower, overalls, open_mouth, solo, sleeveless, looking_at_viewer, simple_background, white_background, collarbone, pink_shirt, teeth, :d, hand_on_hip, standing, ;d, bare_arms, blush, cowboy_shot, holding_ladle, one_eye_closed |
| 1 | 15 |  |  |  |  |  | 1girl, flower, looking_at_viewer, open_mouth, overalls, solo, simple_background, white_background, armpits, :d, blush, upper_teeth_only |
| 2 | 6 |  |  |  |  |  | 1girl, flower, looking_at_viewer, smile, solo, upper_body, collarbone, large_breasts, nipples, simple_background, white_background, closed_mouth, nude, blush_stickers |
| 3 | 10 |  |  |  |  |  | 1girl, open_mouth, :d, overalls, pokemon_(creature), tongue, blush, pink_flower, closed_eyes, looking_at_viewer, pink_shirt, armpits, day, happy, shoes, upper_teeth_only |
| 4 | 5 |  |  |  |  |  | 1girl, blush, looking_at_viewer, navel, nipples, solo, bar_censor, barefoot, closed_mouth, collarbone, completely_nude, pink_flower, smile, female_pubic_hair, full_body, outline, pussy, sitting, spread_legs, arms_behind_back, hand_up, low_twintails, medium_breasts, one_eye_closed, shiny_skin, white_border |
| 5 | 14 |  |  |  |  |  | 1girl, eyelashes, hat, heart, open_mouth, tongue, :d, blush, buttons, long_sleeves, pokemon_(creature), holding, looking_at_viewer, upper_teeth_only, white_headwear, food, official_alternate_costume, waist_apron, solo |
| 6 | 24 |  |  |  |  |  | 1boy, 1girl, blush, hetero, solo_focus, nipples, penis, flower, sex, pussy, navel, open_mouth, vaginal, sweat, completely_nude, mosaic_censoring, spread_legs, looking_at_viewer, cum, medium_breasts, girl_on_top, on_back, smile |
| 7 | 7 |  |  |  |  |  | 1girl, detached_collar, playboy_bunny, rabbit_ears, strapless_leotard, fake_animal_ears, looking_at_viewer, pantyhose, smile, solo, wrist_cuffs, bare_shoulders, hand_on_hip, low_twintails, medium_breasts, cleavage, holding_tray, open_mouth, simple_background, standing, alternate_costume, ass, black_bowtie, blush, closed_mouth, from_behind, hairband, highleg_leotard, looking_back, one_eye_closed, pink_leotard, rabbit_tail, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | flower | overalls | open_mouth | solo | sleeveless | looking_at_viewer | simple_background | white_background | collarbone | pink_shirt | teeth | :d | hand_on_hip | standing | ;d | bare_arms | blush | cowboy_shot | holding_ladle | one_eye_closed | armpits | upper_teeth_only | smile | upper_body | large_breasts | nipples | closed_mouth | nude | blush_stickers | pokemon_(creature) | tongue | pink_flower | closed_eyes | day | happy | shoes | navel | bar_censor | barefoot | completely_nude | female_pubic_hair | full_body | outline | pussy | sitting | spread_legs | arms_behind_back | hand_up | low_twintails | medium_breasts | shiny_skin | white_border | eyelashes | hat | heart | buttons | long_sleeves | holding | white_headwear | food | official_alternate_costume | waist_apron | 1boy | hetero | solo_focus | penis | sex | vaginal | sweat | mosaic_censoring | cum | girl_on_top | on_back | detached_collar | playboy_bunny | rabbit_ears | strapless_leotard | fake_animal_ears | pantyhose | wrist_cuffs | bare_shoulders | cleavage | holding_tray | alternate_costume | ass | black_bowtie | from_behind | hairband | highleg_leotard | looking_back | pink_leotard | rabbit_tail |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-----------|:-------------|:-------|:-------------|:--------------------|:--------------------|:-------------------|:-------------|:-------------|:--------|:-----|:--------------|:-----------|:-----|:------------|:--------|:--------------|:----------------|:-----------------|:----------|:-------------------|:--------|:-------------|:----------------|:----------|:---------------|:-------|:-----------------|:---------------------|:---------|:--------------|:--------------|:------|:--------|:--------|:--------|:-------------|:-----------|:------------------|:--------------------|:------------|:----------|:--------|:----------|:--------------|:-------------------|:----------|:----------------|:-----------------|:-------------|:---------------|:------------|:------|:--------|:----------|:---------------|:----------|:-----------------|:-------|:-----------------------------|:--------------|:-------|:---------|:-------------|:--------|:------|:----------|:--------|:-------------------|:------|:--------------|:----------|:------------------|:----------------|:--------------|:--------------------|:-------------------|:------------|:--------------|:-----------------|:-----------|:---------------|:--------------------|:------|:---------------|:--------------|:-----------|:------------------|:---------------|:---------------|:--------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | X | X | X | X | | X | X | X | | | | X | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | | | X | | X | X | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | X | X | | | X | | | | X | | X | | | | | X | | | | X | X | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | | | X | | X | | | X | | | | | | | | X | | | X | | | X | | | X | X | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 14 |  |  |  |  |  | X | | | X | X | | X | | | | | | X | | | | | X | | | | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 24 |  |  |  |  |  | X | X | | X | | | X | | | | | | | | | | | X | | | | | | X | | | X | | | | | | | | | | | X | | | X | | | | X | | X | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | X | X | | X | X | X | | | | | X | X | | | X | | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/mao_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T14:31:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T20:11:14+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of mao/マオ (Pokémon)
===========================
This is the dataset of mao/マオ (Pokémon), containing 500 images and their tags.
The core tags of this character are 'green\_hair, long\_hair, dark\_skin, dark-skinned\_female, twintails, green\_eyes, hair\_ornament, hair\_flower, breasts, bangs, swept\_bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
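Concretely (this mirrors the snippet in the full card text above):

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mao_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```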
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
b08162aa6e5c04c26a1a897eea14484feb86c649
|
# Dataset Card for "trinity-llama2-5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Shivansh2310/trinity-llama2-5k
|
[
"region:us"
] |
2023-09-11T14:32:08+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7899578, "num_examples": 5000}], "download_size": 4656368, "dataset_size": 7899578}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-11T14:32:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "trinity-llama2-5k"
More Information needed
|
[
"# Dataset Card for \"trinity-llama2-5k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"trinity-llama2-5k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"trinity-llama2-5k\"\n\nMore Information needed"
] |
123abcef02f53309ed72cb2e6ec6285199a9c9a9
|
# Dataset Card for "ethereal_fantasy_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/ethereal_fantasy_prompts
|
[
"region:us"
] |
2023-09-11T14:34:19+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2127, "num_examples": 10}], "download_size": 2331, "dataset_size": 2127}}
|
2023-09-11T14:34:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ethereal_fantasy_prompts"
More Information needed
|
[
"# Dataset Card for \"ethereal_fantasy_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ethereal_fantasy_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ethereal_fantasy_prompts\"\n\nMore Information needed"
] |
172bcc91fa35cc48abebc717a880343cd38165b1
|
# Dataset Card for "subtitles_general_train_set"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joaosanches/subtitles_general_train_set
|
[
"region:us"
] |
2023-09-11T14:35:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "meta", "struct": [{"name": "year", "dtype": "uint32"}, {"name": "imdbId", "dtype": "uint32"}, {"name": "subtitleId", "struct": [{"name": "pt", "dtype": "uint32"}, {"name": "pt_br", "dtype": "uint32"}]}, {"name": "sentenceIds", "struct": [{"name": "pt", "sequence": "uint32"}, {"name": "pt_br", "sequence": "uint32"}]}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14891339.273756089, "num_examples": 126984}], "download_size": 11684383, "dataset_size": 14891339.273756089}}
|
2023-09-11T14:35:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "subtitles_general_train_set"
More Information needed
|
[
"# Dataset Card for \"subtitles_general_train_set\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"subtitles_general_train_set\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"subtitles_general_train_set\"\n\nMore Information needed"
] |
f74e4cc103e867920ad6295c56dc3a4d923f298f
|
# Dataset Card for "trinity-dolly-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Shivansh2310/trinity-dolly-10k
|
[
"region:us"
] |
2023-09-11T14:36:54+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16392818, "num_examples": 10000}], "download_size": 10078470, "dataset_size": 16392818}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-11T14:36:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "trinity-dolly-10k"
More Information needed
|
[
"# Dataset Card for \"trinity-dolly-10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"trinity-dolly-10k\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"trinity-dolly-10k\"\n\nMore Information needed"
] |
bf172613e24371c427b55b896b0e52c4476c2018
|
This dataset is a filtered version of the [MMC4 Multimodal-C4 core fewer-faces dataset](https://github.com/allenai/mmc4#corpus-stats-v11). It contains 144,474 pairs of food image URLs and image captions.
All the code and the model are in the [repository](https://github.com/yusufani/text2food).
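A minimal usage sketch (not part of the original card); the card does not spell out the column names, so the code inspects the schema instead of assuming one:

```python
from datasets import load_dataset

ds = load_dataset("tum-nlp/text2food-mmc4")

# Column names are not documented on the card, so inspect them first.
split_name = next(iter(ds))
print(ds[split_name].column_names)
print(ds[split_name][0])
```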
|
tum-nlp/text2food-mmc4
|
[
"task_categories:text-to-image",
"language:en",
"license:mit",
"food",
"region:us"
] |
2023-09-11T14:42:13+00:00
|
{"language": ["en"], "license": "mit", "task_categories": ["text-to-image"], "pretty_name": "text2food", "tags": ["food"]}
|
2023-11-30T09:27:03+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-to-image #language-English #license-mit #food #region-us
|
This dataset is a filtered version of the MMC4 Multimodal-C4 core fewer-faces dataset. It contains 144,474 pairs of food image URLs and image captions.
All the code and the model are in the repository.
|
[] |
[
"TAGS\n#task_categories-text-to-image #language-English #license-mit #food #region-us \n"
] |
[
29
] |
[
"passage: TAGS\n#task_categories-text-to-image #language-English #license-mit #food #region-us \n"
] |
73e289eeba958458b8f80517859b2e9b8333b0bf
|
# Dataset Card for "line_art_drawing_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/line_art_drawing_prompts
|
[
"region:us"
] |
2023-09-11T14:43:08+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1552162, "num_examples": 10000}], "download_size": 216025, "dataset_size": 1552162}}
|
2023-09-11T14:43:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "line_art_drawing_prompts"
More Information needed
|
[
"# Dataset Card for \"line_art_drawing_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"line_art_drawing_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"line_art_drawing_prompts\"\n\nMore Information needed"
] |
a73c20596806114dd990a6d4a59f49c5681b0147
|
# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/AIDC-ai-business/Marcoroni-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [AIDC-ai-business/Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-13B",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-18T15:05:14.072037](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-13B/blob/main/results_2023-09-18T15-05-14.072037.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5968939242056442,
"acc_stderr": 0.03397009205870784,
"acc_norm": 0.6007957237246586,
"acc_norm_stderr": 0.033948145854358645,
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5769635027861147,
"mc2_stderr": 0.015727623906231773
},
"harness|arc:challenge|25": {
"acc": 0.590443686006826,
"acc_stderr": 0.014370358632472447,
"acc_norm": 0.6245733788395904,
"acc_norm_stderr": 0.014150631435111726
},
"harness|hellaswag|10": {
"acc": 0.6366261700856403,
"acc_stderr": 0.004799882248494813,
"acc_norm": 0.8327026488747261,
"acc_norm_stderr": 0.003724783389253322
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5407407407407407,
"acc_stderr": 0.04304979692464242,
"acc_norm": 0.5407407407407407,
"acc_norm_stderr": 0.04304979692464242
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5921052631578947,
"acc_stderr": 0.039993097127774734,
"acc_norm": 0.5921052631578947,
"acc_norm_stderr": 0.039993097127774734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5962264150943396,
"acc_stderr": 0.030197611600197946,
"acc_norm": 0.5962264150943396,
"acc_norm_stderr": 0.030197611600197946
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6597222222222222,
"acc_stderr": 0.039621355734862175,
"acc_norm": 0.6597222222222222,
"acc_norm_stderr": 0.039621355734862175
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.037336266553835096,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.037336266553835096
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.04655010411319616,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.04655010411319616
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.04560480215720685,
"acc_norm": 0.71,
"acc_norm_stderr": 0.04560480215720685
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.03267151848924777,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.03267151848924777
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.39473684210526316,
"acc_stderr": 0.045981880578165414,
"acc_norm": 0.39473684210526316,
"acc_norm_stderr": 0.045981880578165414
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.36243386243386244,
"acc_stderr": 0.02475747390275206,
"acc_norm": 0.36243386243386244,
"acc_norm_stderr": 0.02475747390275206
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.043435254289490965,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.043435254289490965
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6580645161290323,
"acc_stderr": 0.026985289576552742,
"acc_norm": 0.6580645161290323,
"acc_norm_stderr": 0.026985289576552742
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7151515151515152,
"acc_stderr": 0.035243908445117815,
"acc_norm": 0.7151515151515152,
"acc_norm_stderr": 0.035243908445117815
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.025033870583015178,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.025033870583015178
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6076923076923076,
"acc_stderr": 0.02475600038213095,
"acc_norm": 0.6076923076923076,
"acc_norm_stderr": 0.02475600038213095
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.028578348365473072,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.028578348365473072
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5966386554621849,
"acc_stderr": 0.031866081214088314,
"acc_norm": 0.5966386554621849,
"acc_norm_stderr": 0.031866081214088314
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7908256880733945,
"acc_stderr": 0.01743793717334323,
"acc_norm": 0.7908256880733945,
"acc_norm_stderr": 0.01743793717334323
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4212962962962963,
"acc_stderr": 0.03367462138896079,
"acc_norm": 0.4212962962962963,
"acc_norm_stderr": 0.03367462138896079
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.02552472232455333,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.02552472232455333
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7468354430379747,
"acc_stderr": 0.0283046579430353,
"acc_norm": 0.7468354430379747,
"acc_norm_stderr": 0.0283046579430353
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6946564885496184,
"acc_stderr": 0.040393149787245605,
"acc_norm": 0.6946564885496184,
"acc_norm_stderr": 0.040393149787245605
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7355371900826446,
"acc_stderr": 0.040261875275912073,
"acc_norm": 0.7355371900826446,
"acc_norm_stderr": 0.040261875275912073
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.03826076324884865,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.03826076324884865
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6932515337423313,
"acc_stderr": 0.036230899157241474,
"acc_norm": 0.6932515337423313,
"acc_norm_stderr": 0.036230899157241474
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.7378640776699029,
"acc_stderr": 0.04354631077260595,
"acc_norm": 0.7378640776699029,
"acc_norm_stderr": 0.04354631077260595
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8205128205128205,
"acc_stderr": 0.025140935950335445,
"acc_norm": 0.8205128205128205,
"acc_norm_stderr": 0.025140935950335445
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7994891443167306,
"acc_stderr": 0.014317653708594207,
"acc_norm": 0.7994891443167306,
"acc_norm_stderr": 0.014317653708594207
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.025305258131879716,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.025305258131879716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.47262569832402235,
"acc_stderr": 0.016697420650642752,
"acc_norm": 0.47262569832402235,
"acc_norm_stderr": 0.016697420650642752
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.027057974624494382,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.027057974624494382
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6752411575562701,
"acc_stderr": 0.026596782287697043,
"acc_norm": 0.6752411575562701,
"acc_norm_stderr": 0.026596782287697043
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7098765432098766,
"acc_stderr": 0.025251173936495026,
"acc_norm": 0.7098765432098766,
"acc_norm_stderr": 0.025251173936495026
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.02979071924382972,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.02979071924382972
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45241199478487615,
"acc_stderr": 0.012712265105889136,
"acc_norm": 0.45241199478487615,
"acc_norm_stderr": 0.012712265105889136
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5882352941176471,
"acc_stderr": 0.029896163033125468,
"acc_norm": 0.5882352941176471,
"acc_norm_stderr": 0.029896163033125468
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5866013071895425,
"acc_stderr": 0.019922115682786685,
"acc_norm": 0.5866013071895425,
"acc_norm_stderr": 0.019922115682786685
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6816326530612244,
"acc_stderr": 0.029822533793982055,
"acc_norm": 0.6816326530612244,
"acc_norm_stderr": 0.029822533793982055
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7562189054726368,
"acc_stderr": 0.030360490154014645,
"acc_norm": 0.7562189054726368,
"acc_norm_stderr": 0.030360490154014645
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5769635027861147,
"mc2_stderr": 0.015727623906231773
}
}
```
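
For quick checks, the linked results file can also be fetched directly instead of loading a configuration; a minimal sketch, assuming `huggingface_hub` is installed (the filename comes from the link above, and the exact JSON layout is an assumption):

```python
import json

from huggingface_hub import hf_hub_download

# Download the raw results file for the latest run
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-13B",
    repo_type="dataset",
    filename="results_2023-09-18T15-05-14.072037.json",
)
with open(path) as f:
    payload = json.load(f)

# The per-task metrics shown above are assumed to sit under a "results" key,
# with a fallback to the top level in case the file is laid out flat.
metrics = payload.get("results", payload)["all"]
print(metrics["acc"], metrics["acc_norm"])
```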
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-13B
|
[
"region:us"
] |
2023-09-11T14:45:45+00:00
|
{"pretty_name": "Evaluation run of AIDC-ai-business/Marcoroni-13B", "dataset_summary": "Dataset automatically created during the evaluation run of model [AIDC-ai-business/Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-13B\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-18T15:05:14.072037](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-13B/blob/main/results_2023-09-18T15-05-14.072037.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5968939242056442,\n \"acc_stderr\": 0.03397009205870784,\n \"acc_norm\": 0.6007957237246586,\n \"acc_norm_stderr\": 0.033948145854358645,\n \"mc1\": 0.4186046511627907,\n \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5769635027861147,\n \"mc2_stderr\": 0.015727623906231773\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.590443686006826,\n \"acc_stderr\": 0.014370358632472447,\n \"acc_norm\": 0.6245733788395904,\n \"acc_norm_stderr\": 0.014150631435111726\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6366261700856403,\n \"acc_stderr\": 0.004799882248494813,\n \"acc_norm\": 0.8327026488747261,\n \"acc_norm_stderr\": 0.003724783389253322\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5407407407407407,\n \"acc_stderr\": 0.04304979692464242,\n \"acc_norm\": 0.5407407407407407,\n \"acc_norm_stderr\": 0.04304979692464242\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5921052631578947,\n \"acc_stderr\": 0.039993097127774734,\n \"acc_norm\": 0.5921052631578947,\n \"acc_norm_stderr\": 0.039993097127774734\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5962264150943396,\n \"acc_stderr\": 0.030197611600197946,\n \"acc_norm\": 0.5962264150943396,\n \"acc_norm_stderr\": 0.030197611600197946\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6597222222222222,\n \"acc_stderr\": 0.039621355734862175,\n \"acc_norm\": 0.6597222222222222,\n \"acc_norm_stderr\": 0.039621355734862175\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 
0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.037336266553835096,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.037336266553835096\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.04655010411319616,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.04655010411319616\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.04560480215720685,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.04560480215720685\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.03267151848924777,\n \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.03267151848924777\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.39473684210526316,\n \"acc_stderr\": 0.045981880578165414,\n \"acc_norm\": 0.39473684210526316,\n \"acc_norm_stderr\": 0.045981880578165414\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.36243386243386244,\n \"acc_stderr\": 0.02475747390275206,\n \"acc_norm\": 0.36243386243386244,\n \"acc_norm_stderr\": 0.02475747390275206\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.043435254289490965,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.043435254289490965\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6580645161290323,\n \"acc_stderr\": 0.026985289576552742,\n \"acc_norm\": 0.6580645161290323,\n \"acc_norm_stderr\": 0.026985289576552742\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7151515151515152,\n \"acc_stderr\": 0.035243908445117815,\n \"acc_norm\": 0.7151515151515152,\n \"acc_norm_stderr\": 0.035243908445117815\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386417,\n \"acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386417\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.025033870583015178,\n \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.025033870583015178\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6076923076923076,\n \"acc_stderr\": 0.02475600038213095,\n \"acc_norm\": 0.6076923076923076,\n \"acc_norm_stderr\": 0.02475600038213095\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32592592592592595,\n \"acc_stderr\": 0.028578348365473072,\n \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.028578348365473072\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5966386554621849,\n \"acc_stderr\": 0.031866081214088314,\n \"acc_norm\": 0.5966386554621849,\n \"acc_norm_stderr\": 0.031866081214088314\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7908256880733945,\n \"acc_stderr\": 0.01743793717334323,\n \"acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.01743793717334323\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4212962962962963,\n \"acc_stderr\": 0.03367462138896079,\n \"acc_norm\": 0.4212962962962963,\n \"acc_norm_stderr\": 0.03367462138896079\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.02552472232455333,\n \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.02552472232455333\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7468354430379747,\n \"acc_stderr\": 0.0283046579430353,\n \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.0283046579430353\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.040393149787245605,\n \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.040393149787245605\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7355371900826446,\n \"acc_stderr\": 0.040261875275912073,\n \"acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.040261875275912073\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.03826076324884865,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.03826076324884865\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6932515337423313,\n \"acc_stderr\": 0.036230899157241474,\n \"acc_norm\": 0.6932515337423313,\n \"acc_norm_stderr\": 0.036230899157241474\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260595,\n \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260595\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8205128205128205,\n \"acc_stderr\": 0.025140935950335445,\n \"acc_norm\": 0.8205128205128205,\n \"acc_norm_stderr\": 0.025140935950335445\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7994891443167306,\n \"acc_stderr\": 0.014317653708594207,\n \"acc_norm\": 0.7994891443167306,\n \"acc_norm_stderr\": 0.014317653708594207\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.025305258131879716,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.025305258131879716\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.47262569832402235,\n \"acc_stderr\": 0.016697420650642752,\n \"acc_norm\": 0.47262569832402235,\n \"acc_norm_stderr\": 0.016697420650642752\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6633986928104575,\n \"acc_stderr\": 0.027057974624494382,\n \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.027057974624494382\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495026,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495026\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.475177304964539,\n \"acc_stderr\": 0.02979071924382972,\n \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.02979071924382972\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45241199478487615,\n \"acc_stderr\": 0.012712265105889136,\n \"acc_norm\": 0.45241199478487615,\n \"acc_norm_stderr\": 0.012712265105889136\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.029896163033125468,\n \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.029896163033125468\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5866013071895425,\n \"acc_stderr\": 0.019922115682786685,\n \"acc_norm\": 0.5866013071895425,\n \"acc_norm_stderr\": 0.019922115682786685\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6816326530612244,\n \"acc_stderr\": 0.029822533793982055,\n \"acc_norm\": 0.6816326530612244,\n \"acc_norm_stderr\": 0.029822533793982055\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7562189054726368,\n \"acc_stderr\": 0.030360490154014645,\n \"acc_norm\": 0.7562189054726368,\n \"acc_norm_stderr\": 0.030360490154014645\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4186046511627907,\n \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5769635027861147,\n \"mc2_stderr\": 0.015727623906231773\n }\n}\n```", "repo_url": "https://huggingface.co/AIDC-ai-business/Marcoroni-13B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email 
protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|arc:challenge|25_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|arc:challenge|25_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hellaswag|10_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hellaswag|10_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-45-30.030837.parquet", 
"**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T15-45-30.030837.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T15-05-14.072037.parquet", 
"**/details_harness|hendrycksTest-college_physics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T15-05-14.072037.parquet", 
"**/details_harness|hendrycksTest-professional_law|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T15-05-14.072037.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-18T15-05-14.072037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": 
["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": 
[{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-philosophy|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", 
"data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-18T15-05-14.072037.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T15_45_30.030837", "path": ["results_2023-09-11T15-45-30.030837.parquet"]}, {"split": "2023_09_18T15_05_14.072037", "path": ["results_2023-09-18T15-05-14.072037.parquet"]}, {"split": "latest", "path": ["results_2023-09-18T15-05-14.072037.parquet"]}]}]}
|
2023-09-18T14:06:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-13B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-13B on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
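The code block that originally followed was stripped in this dump; a minimal reconstruction of the standard loading pattern, assuming the leaderboard's usual `details_<org>__<model>` repository naming and a config name taken from the metadata above:

```python
from datasets import load_dataset

# Repo id follows the leaderboard's details_<org>__<model> convention (assumed here);
# "harness_truthfulqa_mc_0" is one of the config names listed in this card's metadata.
data = load_dataset("open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-13B",
                    "harness_truthfulqa_mc_0",
                    split="latest")
```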
## Latest results
These are the latest results from run 2023-09-18T15:05:14.072037 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-13B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-18T15:05:14.072037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-13B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-18T15:05:14.072037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-13B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-18T15:05:14.072037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ed62f3452bcc6e3039ceaa70ac0cb3b38b7170c1
|
Data from the 枪神纪 (QSJ) Baidu Tieba forum spanning the past 20 years, generated with ChatGPT 3.5 and manually pruned from 200,000 entries down to 50,000; it could likely be pruned further to about 30,000, but I didn't bother reviewing more.
ernie_dataset.jsonl is a Wenxin Yiyan (ERNIE Bot) dataset that includes rankings; question similarity is computed with a threshold of 0.7, and similar questions are combined.
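The card does not say how the similarity grouping was implemented; a minimal sketch of the described step, assuming TF-IDF cosine similarity as the metric (the actual embedding and metric are not specified):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = [
    "how to unlock the sniper class",   # illustrative questions, not from the dataset
    "how to unlock sniper class",
    "best pistol for beginners",
]

# Embed the questions and compute the pairwise cosine-similarity matrix
sim = cosine_similarity(TfidfVectorizer().fit_transform(questions))

# Greedily merge each question into the first group whose
# representative is at least 0.7 similar to it
THRESHOLD = 0.7
groups = []  # each group is a list of question indices
for i in range(len(questions)):
    for g in groups:
        if sim[i, g[0]] >= THRESHOLD:
            g.append(i)
            break
    else:
        groups.append([i])

print(groups)  # [[0, 1], [2]] — the two near-duplicate questions are merged
```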
|
ancss/QSJ_dataset
|
[
"license:mit",
"region:us"
] |
2023-09-11T14:47:05+00:00
|
{"license": "mit"}
|
2023-09-13T06:39:50+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
Data from the 枪神纪 (QSJ) Baidu Tieba forum spanning the past 20 years, generated with ChatGPT 3.5 and manually pruned from 200,000 entries down to 50,000; it could likely be pruned further to about 30,000, but I didn't bother reviewing more.
ernie_dataset.jsonl is a Wenxin Yiyan (ERNIE Bot) dataset that includes rankings; question similarity is computed with a threshold of 0.7, and similar questions are combined.
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
760a6f4d101c7bd7c269cadb1fd8f5250a14071c
|
# Dataset Card for "pubmed-200k-rct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
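Until the card is filled in, a minimal loading sketch, assuming the configuration names declared in this repo's YAML metadata ("default" plus three "embedding_*" configs keyed by the same uid):

```python
from datasets import load_dataset

# "default" carries the labelled sentences; the embedding_* configs hold
# precomputed sentence embeddings that can be joined back on "uid".
rct = load_dataset("pietrolesci/pubmed-200k-rct", "default", split="validation")
print(rct[0])  # {'labels': ..., 'text': ..., 'uid': ...}
```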
|
pietrolesci/pubmed-200k-rct
|
[
"region:us"
] |
2023-09-11T14:48:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "embedding_all-MiniLM-L12-v2", "data_files": [{"split": "train", "path": "embedding_all-MiniLM-L12-v2/train-*"}, {"split": "validation", "path": "embedding_all-MiniLM-L12-v2/validation-*"}, {"split": "test", "path": "embedding_all-MiniLM-L12-v2/test-*"}]}, {"config_name": "embedding_all-mpnet-base-v2", "data_files": [{"split": "train", "path": "embedding_all-mpnet-base-v2/train-*"}, {"split": "validation", "path": "embedding_all-mpnet-base-v2/validation-*"}, {"split": "test", "path": "embedding_all-mpnet-base-v2/test-*"}]}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "data_files": [{"split": "train", "path": "embedding_multi-qa-mpnet-base-dot-v1/train-*"}, {"split": "validation", "path": "embedding_multi-qa-mpnet-base-dot-v1/validation-*"}, {"split": "test", "path": "embedding_multi-qa-mpnet-base-dot-v1/test-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "BACKGROUND", "1": "CONCLUSIONS", "2": "METHODS", "3": "OBJECTIVE", "4": "RESULTS"}}}}, {"name": "text", "dtype": "string"}, {"name": "uid", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 379382835, "num_examples": 2211861}, {"name": "validation", "num_bytes": 4994899, "num_examples": 28932}, {"name": "test", "num_bytes": 5026344, "num_examples": 29493}], "download_size": 209039426, "dataset_size": 389404078}, {"config_name": "embedding_all-MiniLM-L12-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-MiniLM-L12-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 3423960828, "num_examples": 2211861}, {"name": "validation", "num_bytes": 44786736, "num_examples": 28932}, {"name": "test", "num_bytes": 45655164, "num_examples": 29493}], "download_size": 4916495311, "dataset_size": 3514402728}, {"config_name": "embedding_all-mpnet-base-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 6821379324, "num_examples": 2211861}, {"name": "validation", "num_bytes": 89226288, "num_examples": 28932}, {"name": "test", "num_bytes": 90956412, "num_examples": 29493}], "download_size": 8405313596, "dataset_size": 7001562024}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_multi-qa-mpnet-base-dot-v1", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 6821379324, "num_examples": 2211861}, {"name": "validation", "num_bytes": 89226288, "num_examples": 28932}, {"name": "test", "num_bytes": 90956412, "num_examples": 29493}], "download_size": 8405286790, "dataset_size": 7001562024}]}
|
2023-09-11T15:14:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed-200k-rct"
More Information needed
|
[
"# Dataset Card for \"pubmed-200k-rct\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed-200k-rct\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed-200k-rct\"\n\nMore Information needed"
] |
bb47a15cc7d5f2ba9e503d02c3bd6f2f6d9e1b3f
|
# Dataset Card for Evaluation run of lgaalves/gpt2_platypus-camel_physics
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lgaalves/gpt2_platypus-camel_physics
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [lgaalves/gpt2_platypus-camel_physics](https://huggingface.co/lgaalves/gpt2_platypus-camel_physics) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lgaalves__gpt2_platypus-camel_physics",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T13:50:32.288438](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2_platypus-camel_physics/blob/main/results_2023-10-25T13-50-32.288438.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002307046979865772,
"em_stderr": 0.0004913221265094493,
"f1": 0.04785339765100675,
"f1_stderr": 0.001366270058429369,
"acc": 0.24822415153906865,
"acc_stderr": 0.007026065573457936
},
"harness|drop|3": {
"em": 0.002307046979865772,
"em_stderr": 0.0004913221265094493,
"f1": 0.04785339765100675,
"f1_stderr": 0.001366270058429369
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.4964483030781373,
"acc_stderr": 0.014052131146915873
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_lgaalves__gpt2_platypus-camel_physics
|
[
"region:us"
] |
2023-09-11T14:51:35+00:00
|
{"pretty_name": "Evaluation run of lgaalves/gpt2_platypus-camel_physics", "dataset_summary": "Dataset automatically created during the evaluation run of model [lgaalves/gpt2_platypus-camel_physics](https://huggingface.co/lgaalves/gpt2_platypus-camel_physics) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lgaalves__gpt2_platypus-camel_physics\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T13:50:32.288438](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2_platypus-camel_physics/blob/main/results_2023-10-25T13-50-32.288438.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002307046979865772,\n \"em_stderr\": 0.0004913221265094493,\n \"f1\": 0.04785339765100675,\n \"f1_stderr\": 0.001366270058429369,\n \"acc\": 0.24822415153906865,\n \"acc_stderr\": 0.007026065573457936\n },\n \"harness|drop|3\": {\n \"em\": 0.002307046979865772,\n \"em_stderr\": 0.0004913221265094493,\n \"f1\": 0.04785339765100675,\n \"f1_stderr\": 0.001366270058429369\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.4964483030781373,\n \"acc_stderr\": 0.014052131146915873\n }\n}\n```", "repo_url": "https://huggingface.co/lgaalves/gpt2_platypus-camel_physics", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|arc:challenge|25_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T13_50_32.288438", "path": ["**/details_harness|drop|3_2023-10-25T13-50-32.288438.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T13-50-32.288438.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T13_50_32.288438", "path": ["**/details_harness|gsm8k|5_2023-10-25T13-50-32.288438.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T13-50-32.288438.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hellaswag|10_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-51-24.784876.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-51-24.784876.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T15-51-24.784876.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T15-51-24.784876.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T15-51-24.784876.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T13_50_32.288438", "path": ["**/details_harness|winogrande|5_2023-10-25T13-50-32.288438.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T13-50-32.288438.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T15_51_24.784876", "path": ["results_2023-09-11T15-51-24.784876.parquet"]}, {"split": "2023_10_25T13_50_32.288438", "path": ["results_2023-10-25T13-50-32.288438.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T13-50-32.288438.parquet"]}]}]}
|
2023-10-25T12:50:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of lgaalves/gpt2_platypus-camel_physics
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model lgaalves/gpt2_platypus-camel_physics on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-25T13:50:32.288438 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of lgaalves/gpt2_platypus-camel_physics",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt2_platypus-camel_physics on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T13:50:32.288438(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of lgaalves/gpt2_platypus-camel_physics",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt2_platypus-camel_physics on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T13:50:32.288438(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
176,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of lgaalves/gpt2_platypus-camel_physics## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt2_platypus-camel_physics on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T13:50:32.288438(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
237ae98dce44614a26a0482a5c7e3ffb72128ba7
|
# Dataset Card for Evaluation run of lgaalves/gpt2_camel_physics-platypus
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lgaalves/gpt2_camel_physics-platypus
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [lgaalves/gpt2_camel_physics-platypus](https://huggingface.co/lgaalves/gpt2_camel_physics-platypus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus",
"harness_winogrande_5",
split="train")
```
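
Because each run is stored as a timestamped split, with `latest` always mirroring the most recent run, you can also pin a specific evaluation. A minimal sketch, using the split names listed in this repository's configuration metadata:

```python
from datasets import load_dataset

# Pin the Winogrande details to one specific run; the timestamped split
# name below is taken from this repo's config listing.
pinned = load_dataset(
    "open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus",
    "harness_winogrande_5",
    split="2023_10_25T17_38_39.020163",
)

# Or simply follow whatever the newest run is:
latest = load_dataset(
    "open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus",
    "harness_winogrande_5",
    split="latest",
)
```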
## Latest results
These are the [latest results from run 2023-10-25T17:38:39.020163](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus/blob/main/results_2023-10-25T17-38-39.020163.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002307046979865772,
"em_stderr": 0.0004913221265094493,
"f1": 0.04785339765100675,
"f1_stderr": 0.001366270058429369,
"acc": 0.24822415153906865,
"acc_stderr": 0.007026065573457936
},
"harness|drop|3": {
"em": 0.002307046979865772,
"em_stderr": 0.0004913221265094493,
"f1": 0.04785339765100675,
"f1_stderr": 0.001366270058429369
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.4964483030781373,
"acc_stderr": 0.014052131146915873
}
}
```
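
The aggregated numbers above are also stored in the `results` configuration, one split per run, so they can be fetched programmatically. A minimal sketch, assuming the row layout roughly mirrors the JSON excerpt:

```python
from datasets import load_dataset

# The "results" config holds the aggregated metrics for every run;
# its "latest" split mirrors the most recent results parquet file.
results = load_dataset(
    "open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus",
    "results",
    split="latest",
)
# Field names depend on the harness version; print a row to inspect them.
print(results[0])
```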
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus
|
[
"region:us"
] |
2023-09-11T14:53:14+00:00
|
{"pretty_name": "Evaluation run of lgaalves/gpt2_camel_physics-platypus", "dataset_summary": "Dataset automatically created during the evaluation run of model [lgaalves/gpt2_camel_physics-platypus](https://huggingface.co/lgaalves/gpt2_camel_physics-platypus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T17:38:39.020163](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus/blob/main/results_2023-10-25T17-38-39.020163.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002307046979865772,\n \"em_stderr\": 0.0004913221265094493,\n \"f1\": 0.04785339765100675,\n \"f1_stderr\": 0.001366270058429369,\n \"acc\": 0.24822415153906865,\n \"acc_stderr\": 0.007026065573457936\n },\n \"harness|drop|3\": {\n \"em\": 0.002307046979865772,\n \"em_stderr\": 0.0004913221265094493,\n \"f1\": 0.04785339765100675,\n \"f1_stderr\": 0.001366270058429369\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.4964483030781373,\n \"acc_stderr\": 0.014052131146915873\n }\n}\n```", "repo_url": "https://huggingface.co/lgaalves/gpt2_camel_physics-platypus", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|arc:challenge|25_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T17_38_39.020163", "path": ["**/details_harness|drop|3_2023-10-25T17-38-39.020163.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T17-38-39.020163.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T17_38_39.020163", "path": ["**/details_harness|gsm8k|5_2023-10-25T17-38-39.020163.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T17-38-39.020163.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hellaswag|10_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-53-04.413591.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-53-04.413591.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T15-53-04.413591.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T15-53-04.413591.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T15-53-04.413591.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T17_38_39.020163", "path": ["**/details_harness|winogrande|5_2023-10-25T17-38-39.020163.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T17-38-39.020163.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T15_53_04.413591", "path": ["results_2023-09-11T15-53-04.413591.parquet"]}, {"split": "2023_10_25T17_38_39.020163", "path": ["results_2023-10-25T17-38-39.020163.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T17-38-39.020163.parquet"]}]}]}
|
2023-10-25T16:38:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of lgaalves/gpt2_camel_physics-platypus
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model lgaalves/gpt2_camel_physics-platypus on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-25T17:38:39.020163 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of lgaalves/gpt2_camel_physics-platypus",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt2_camel_physics-platypus on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T17:38:39.020163(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of lgaalves/gpt2_camel_physics-platypus",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt2_camel_physics-platypus on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T17:38:39.020163(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
176,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of lgaalves/gpt2_camel_physics-platypus## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt2_camel_physics-platypus on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T17:38:39.020163(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d9dde62c2c7da156972bb62fac9e96f155f10134
|
# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-13b-pretrain
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/YeungNLP/firefly-llama2-13b-pretrain
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [YeungNLP/firefly-llama2-13b-pretrain](https://huggingface.co/YeungNLP/firefly-llama2-13b-pretrain) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_YeungNLP__firefly-llama2-13b-pretrain",
"harness_winogrande_5",
split="train")
```
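Beyond a single task configuration, here is a minimal sketch for pulling the aggregated metrics directly (assuming the "results" configuration and its "latest" split follow the layout described above):
```python
from datasets import load_dataset

# Load the aggregated "results" configuration; the "latest" split points
# to the most recent evaluation run of this model.
results = load_dataset(
    "open-llm-leaderboard/details_YeungNLP__firefly-llama2-13b-pretrain",
    "results",
    split="latest",
)
print(results[0])  # one record of aggregated metrics
```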
## Latest results
These are the [latest results from run 2023-10-28T13:27:01.692848](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-llama2-13b-pretrain/blob/main/results_2023-10-28T13-27-01.692848.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493673,
"f1": 0.06274223993288629,
"f1_stderr": 0.0013975551378027755,
"acc": 0.42049925411671923,
"acc_stderr": 0.009895672255021266
},
"harness|drop|3": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493673,
"f1": 0.06274223993288629,
"f1_stderr": 0.0013975551378027755
},
"harness|gsm8k|5": {
"acc": 0.08567096285064443,
"acc_stderr": 0.007709218855882792
},
"harness|winogrande|5": {
"acc": 0.755327545382794,
"acc_stderr": 0.012082125654159738
}
}
```
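If you only need the raw JSON shown above, a small sketch using `huggingface_hub` (assuming the results file keeps the `"all"` block displayed here):
```python
import json

from huggingface_hub import hf_hub_download

# Fetch the raw results file linked above and read the aggregate metrics.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_YeungNLP__firefly-llama2-13b-pretrain",
    repo_type="dataset",
    filename="results_2023-10-28T13-27-01.692848.json",
)
with open(path) as f:
    results = json.load(f)
print(results["all"]["acc"])  # 0.42049925411671923
```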
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_YeungNLP__firefly-llama2-13b-pretrain
|
[
"region:us"
] |
2023-09-11T15:09:17+00:00
|
{"pretty_name": "Evaluation run of YeungNLP/firefly-llama2-13b-pretrain", "dataset_summary": "Dataset automatically created during the evaluation run of model [YeungNLP/firefly-llama2-13b-pretrain](https://huggingface.co/YeungNLP/firefly-llama2-13b-pretrain) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_YeungNLP__firefly-llama2-13b-pretrain\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T13:27:01.692848](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-llama2-13b-pretrain/blob/main/results_2023-10-28T13-27-01.692848.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135493673,\n \"f1\": 0.06274223993288629,\n \"f1_stderr\": 0.0013975551378027755,\n \"acc\": 0.42049925411671923,\n \"acc_stderr\": 0.009895672255021266\n },\n \"harness|drop|3\": {\n \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135493673,\n \"f1\": 0.06274223993288629,\n \"f1_stderr\": 0.0013975551378027755\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08567096285064443,\n \"acc_stderr\": 0.007709218855882792\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.755327545382794,\n \"acc_stderr\": 0.012082125654159738\n }\n}\n```", "repo_url": "https://huggingface.co/YeungNLP/firefly-llama2-13b-pretrain", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|arc:challenge|25_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T13_27_01.692848", "path": ["**/details_harness|drop|3_2023-10-28T13-27-01.692848.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T13-27-01.692848.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T13_27_01.692848", "path": ["**/details_harness|gsm8k|5_2023-10-28T13-27-01.692848.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T13-27-01.692848.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hellaswag|10_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-09-00.658603.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-09-00.658603.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-09-00.658603.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T16-09-00.658603.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T16-09-00.658603.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T16-09-00.658603.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T13_27_01.692848", "path": ["**/details_harness|winogrande|5_2023-10-28T13-27-01.692848.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T13-27-01.692848.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T16_09_00.658603", "path": ["results_2023-09-11T16-09-00.658603.parquet"]}, {"split": "2023_10_28T13_27_01.692848", "path": ["results_2023-10-28T13-27-01.692848.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T13-27-01.692848.parquet"]}]}]}
|
2023-10-28T12:27:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-13b-pretrain
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-13b-pretrain on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-28T13:27:01.692848 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-13b-pretrain",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-13b-pretrain on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T13:27:01.692848(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-13b-pretrain",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-13b-pretrain on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T13:27:01.692848(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of YeungNLP/firefly-llama2-13b-pretrain## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model YeungNLP/firefly-llama2-13b-pretrain on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T13:27:01.692848(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
43bcf901ca27589f8936e1889676c717ddbd7d81
|
# V0
The data directly encodes all bboxes into a single sentence that is fed to the LLM; the model must regress all of the numbers directly from the input.
# V1
Adds category prompts on top of V0.
# V2/V2_normalized
The data consists of the corresponding categories and bboxes, but they are not turned into tokens directly; instead, the bbox coordinates are encoded inside the program as special tokens, which serve as the regression targets.
# V3_normalized
Builds on the V2_normalized data by adding all of the category information and the corresponding token mapping to the question; everything else is identical, only shuffled.
# V4
Data collected from scannet_detection_train, without normalization; all normalization is now done inside the program, with local guidance added.
# V5_normalized
Builds on the V4_normalized data by representing the original box information as "close to center" and "far from center".
V4 is currently the main experimental data.
The point cloud data uses scannet_detection_train, for which the visualization shows no offset.
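As an illustration of the special-token encoding described for V2/V2_normalized, here is a minimal sketch (the bin count, the `<loc_i>` token format, and the helper names are assumptions for illustration, not the dataset's actual vocabulary):
```python
# Illustrative sketch: quantize normalized bbox coordinates into special
# tokens that an LLM can regress over. NUM_BINS and <loc_i> are assumed.
NUM_BINS = 256

def coord_to_token(x):
    """Map a coordinate in [0, 1] to one of NUM_BINS special tokens."""
    i = min(int(x * NUM_BINS), NUM_BINS - 1)
    return f"<loc_{i}>"

def encode_bbox(category, bbox):
    """Render one (category, bbox) pair as the token string fed to the LLM."""
    tokens = "".join(coord_to_token(v) for v in bbox)
    return f"{category}: {tokens}"

print(encode_bbox("chair", [0.12, 0.40, 0.55, 0.08, 0.30, 0.25]))
# chair: <loc_30><loc_102><loc_140><loc_20><loc_76><loc_64>
```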
|
hmxiong/ScanNet-Detection-Instruction
|
[
"region:us"
] |
2023-09-11T15:10:02+00:00
|
{}
|
2023-10-10T10:11:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# V0
The data directly encodes all bboxes into a single sentence that is fed to the LLM; the model must regress all of the numbers directly from the input.
# V1
Adds category prompts on top of V0.
# V2/V2_normalized
The data consists of the corresponding categories and bboxes, but they are not turned into tokens directly; instead, the bbox coordinates are encoded inside the program as special tokens, which serve as the regression targets.
# V3_normalized
Builds on the V2_normalized data by adding all of the category information and the corresponding token mapping to the question; everything else is identical, only shuffled.
# V4
Data collected from scannet_detection_train, without normalization; all normalization is now done inside the program, with local guidance added.
# V5_normalized
Builds on the V4_normalized data by representing the original box information as "close to center" and "far from center".
V4 is currently the main experimental data.
The point cloud data uses scannet_detection_train, for which the visualization shows no offset.
|
[
"# V0\n使用的数据为直接将所有的bbox编码为一句话然后送进去LLM,需要模型根据输入来直接回归出所有的数字",
"# V1\nV0基础上加入了类别提示",
"# V2/V2_normalized\n使用的数据为对应的类别和bbox,但是没有直接变为token,而是需要在程序内部将bbox坐标编码为special token作为回归的对象",
"# V3_normalized\n在V2_normalized数据的基础之上在question中添加了全部的类别信息以及对应token的映射map,所有的都一样只是进行了乱序",
"# V4\n基于scannet_detection_train收集得到的数据,没有经过归一化处理,以后归一化处理全部在程序中进行,加上local guidance",
"# V5_normalized\n在V4_normalized数据的基础之上将原始的box信息表示为 close to center 和 far from center\n\n目前主要使用V4作为主要实验数据\n\n点运数据使用scannet_detection_train为可视化没有发生偏移的数据"
] |
[
"TAGS\n#region-us \n",
"# V0\n使用的数据为直接将所有的bbox编码为一句话然后送进去LLM,需要模型根据输入来直接回归出所有的数字",
"# V1\nV0基础上加入了类别提示",
"# V2/V2_normalized\n使用的数据为对应的类别和bbox,但是没有直接变为token,而是需要在程序内部将bbox坐标编码为special token作为回归的对象",
"# V3_normalized\n在V2_normalized数据的基础之上在question中添加了全部的类别信息以及对应token的映射map,所有的都一样只是进行了乱序",
"# V4\n基于scannet_detection_train收集得到的数据,没有经过归一化处理,以后归一化处理全部在程序中进行,加上local guidance",
"# V5_normalized\n在V4_normalized数据的基础之上将原始的box信息表示为 close to center 和 far from center\n\n目前主要使用V4作为主要实验数据\n\n点运数据使用scannet_detection_train为可视化没有发生偏移的数据"
] |
[
6,
32,
11,
47,
43,
39,
62
] |
[
"passage: TAGS\n#region-us \n# V0\n使用的数据为直接将所有的bbox编码为一句话然后送进去LLM,需要模型根据输入来直接回归出所有的数字# V1\nV0基础上加入了类别提示# V2/V2_normalized\n使用的数据为对应的类别和bbox,但是没有直接变为token,而是需要在程序内部将bbox坐标编码为special token作为回归的对象# V3_normalized\n在V2_normalized数据的基础之上在question中添加了全部的类别信息以及对应token的映射map,所有的都一样只是进行了乱序# V4\n基于scannet_detection_train收集得到的数据,没有经过归一化处理,以后归一化处理全部在程序中进行,加上local guidance# V5_normalized\n在V4_normalized数据的基础之上将原始的box信息表示为 close to center 和 far from center\n\n目前主要使用V4作为主要实验数据\n\n点运数据使用scannet_detection_train为可视化没有发生偏移的数据"
] |
c5973fc3f0c2270335a9cf53672938257f55c6cc
|
# Dataset Card for Evaluation run of NekoPunchBBB/Llama-2-13b-hf_Open-Platypus
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NekoPunchBBB/Llama-2-13b-hf_Open-Platypus
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NekoPunchBBB/Llama-2-13b-hf_Open-Platypus](https://huggingface.co/NekoPunchBBB/Llama-2-13b-hf_Open-Platypus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus",
"harness_winogrande_5",
split="train")
```
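Since the repo exposes 64 task configurations, a small sketch for enumerating them before picking one (`get_dataset_config_names` is a standard helper in the `datasets` library):
```python
from datasets import get_dataset_config_names

# List every evaluation-task configuration available in this details repo.
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus"
)
print(len(configs), configs[:5])
```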
## Latest results
These are the [latest results from run 2023-10-28T23:55:29.524806](https://huggingface.co/datasets/open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus/blob/main/results_2023-10-28T23-55-29.524806.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.05985213926174496,
"f1_stderr": 0.0013641672120704657,
"acc": 0.4325617395685546,
"acc_stderr": 0.009923090021448928
},
"harness|drop|3": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.05985213926174496,
"f1_stderr": 0.0013641672120704657
},
"harness|gsm8k|5": {
"acc": 0.09401061410159212,
"acc_stderr": 0.00803881981887246
},
"harness|winogrande|5": {
"acc": 0.771112865035517,
"acc_stderr": 0.011807360224025398
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus
|
[
"region:us"
] |
2023-09-11T15:11:57+00:00
|
{"pretty_name": "Evaluation run of NekoPunchBBB/Llama-2-13b-hf_Open-Platypus", "dataset_summary": "Dataset automatically created during the evaluation run of model [NekoPunchBBB/Llama-2-13b-hf_Open-Platypus](https://huggingface.co/NekoPunchBBB/Llama-2-13b-hf_Open-Platypus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T23:55:29.524806](https://huggingface.co/datasets/open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus/blob/main/results_2023-10-28T23-55-29.524806.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388544,\n \"f1\": 0.05985213926174496,\n \"f1_stderr\": 0.0013641672120704657,\n \"acc\": 0.4325617395685546,\n \"acc_stderr\": 0.009923090021448928\n },\n \"harness|drop|3\": {\n \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388544,\n \"f1\": 0.05985213926174496,\n \"f1_stderr\": 0.0013641672120704657\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09401061410159212,\n \"acc_stderr\": 0.00803881981887246\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.771112865035517,\n \"acc_stderr\": 0.011807360224025398\n }\n}\n```", "repo_url": "https://huggingface.co/NekoPunchBBB/Llama-2-13b-hf_Open-Platypus", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|arc:challenge|25_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T23_55_29.524806", "path": ["**/details_harness|drop|3_2023-10-28T23-55-29.524806.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T23-55-29.524806.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T23_55_29.524806", "path": ["**/details_harness|gsm8k|5_2023-10-28T23-55-29.524806.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T23-55-29.524806.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hellaswag|10_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-11-41.270351.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-11-41.270351.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-11-41.270351.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T16-11-41.270351.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T16-11-41.270351.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T16-11-41.270351.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T23_55_29.524806", "path": ["**/details_harness|winogrande|5_2023-10-28T23-55-29.524806.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T23-55-29.524806.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T16_11_41.270351", "path": ["results_2023-09-11T16-11-41.270351.parquet"]}, {"split": "2023_10_28T23_55_29.524806", "path": ["results_2023-10-28T23-55-29.524806.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T23-55-29.524806.parquet"]}]}]}
|
2023-10-28T22:55:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of NekoPunchBBB/Llama-2-13b-hf_Open-Platypus
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NekoPunchBBB/Llama-2-13b-hf_Open-Platypus on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
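```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus",
	"harness_winogrande_5",
	split="train")
```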
## Latest results
These are the latest results from run 2023-10-28T23:55:29.524806 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and in the "latest" split for each eval):
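```python
{
    "all": {
        "em": 0.0017827181208053692,
        "em_stderr": 0.00043200973460388544,
        "f1": 0.05985213926174496,
        "f1_stderr": 0.0013641672120704657,
        "acc": 0.4325617395685546,
        "acc_stderr": 0.009923090021448928
    },
    "harness|drop|3": {
        "em": 0.0017827181208053692,
        "em_stderr": 0.00043200973460388544,
        "f1": 0.05985213926174496,
        "f1_stderr": 0.0013641672120704657
    },
    "harness|gsm8k|5": {
        "acc": 0.09401061410159212,
        "acc_stderr": 0.00803881981887246
    },
    "harness|winogrande|5": {
        "acc": 0.771112865035517,
        "acc_stderr": 0.011807360224025398
    }
}
```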
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of NekoPunchBBB/Llama-2-13b-hf_Open-Platypus",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NekoPunchBBB/Llama-2-13b-hf_Open-Platypus on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T23:55:29.524806(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NekoPunchBBB/Llama-2-13b-hf_Open-Platypus",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NekoPunchBBB/Llama-2-13b-hf_Open-Platypus on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T23:55:29.524806(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
31,
31,
179,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NekoPunchBBB/Llama-2-13b-hf_Open-Platypus## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NekoPunchBBB/Llama-2-13b-hf_Open-Platypus on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T23:55:29.524806(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8dd5e7c2318d31dcdf3ee9b4c25468321cd2d65a
|
# Dataset Card for "imdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pietrolesci/imdb
|
[
"region:us"
] |
2023-09-11T15:18:12+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "embedding_all-MiniLM-L12-v2", "data_files": [{"split": "train", "path": "embedding_all-MiniLM-L12-v2/train-*"}, {"split": "test", "path": "embedding_all-MiniLM-L12-v2/test-*"}]}, {"config_name": "embedding_all-mpnet-base-v2", "data_files": [{"split": "train", "path": "embedding_all-mpnet-base-v2/train-*"}, {"split": "test", "path": "embedding_all-mpnet-base-v2/test-*"}]}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "data_files": [{"split": "train", "path": "embedding_multi-qa-mpnet-base-dot-v1/train-*"}, {"split": "test", "path": "embedding_multi-qa-mpnet-base-dot-v1/test-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "uid", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 33632823, "num_examples": 25000}, {"name": "test", "num_bytes": 32850685, "num_examples": 25000}], "download_size": 41729077, "dataset_size": 66483508}, {"config_name": "embedding_all-MiniLM-L12-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-MiniLM-L12-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 38700000, "num_examples": 25000}, {"name": "test", "num_bytes": 38700000, "num_examples": 25000}], "download_size": 108242075, "dataset_size": 77400000}, {"config_name": "embedding_all-mpnet-base-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 77100000, "num_examples": 25000}, {"name": "test", "num_bytes": 77100000, "num_examples": 25000}], "download_size": 185073496, "dataset_size": 154200000}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_multi-qa-mpnet-base-dot-v1", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 77100000, "num_examples": 25000}, {"name": "test", "num_bytes": 77100000, "num_examples": 25000}], "download_size": 185072395, "dataset_size": 154200000}]}
|
2023-09-11T15:19:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "imdb"
More Information needed
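The configuration metadata above declares the raw text/label data plus several precomputed sentence-embedding configurations; a minimal, hypothetical loading sketch based on those config names:

```python
from datasets import load_dataset

# Config names taken from the dataset metadata; the usage pattern itself is
# an assumption, as the card documents nothing further.
reviews = load_dataset("pietrolesci/imdb", "default", split="train")

# Precomputed sentence embeddings, joinable to the reviews via the shared "uid" column.
embeddings = load_dataset("pietrolesci/imdb", "embedding_all-MiniLM-L12-v2", split="train")
```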
|
[
"# Dataset Card for \"imdb\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"imdb\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"imdb\"\n\nMore Information needed"
] |
431161c10a41a50b02d078303100e7c870a6cc67
|
# Airoboros 2.2 Dealignment
This is a dealignment extraction of the airoboros-2.2 dataset, which can be found [here](https://huggingface.co/datasets/jondurbin/airoboros-2.2).
**ALL CREDITS TO [@jondurbin](https://huggingface.co/jondurbin) FOR THIS AWESOME DATASET!**
**YOU MUST HAVE ACCESS TO THE ORIGINAL DATASET BEFORE REQUESTING ACCESS TO THIS DATASET!** \
(But I can't check if you actually have it or not, so I set it to auto-approval.)
# Original README.md
## Overview
This dataset is mostly a continuation of https://hf.co/datasets/jondurbin/airoboros-2.1, with some notable additions and fixes.
__*I've gated access with a request, due to the de-alignment data. To download, you must agree to the following:*__
- Some of the content is "toxic"/"harmful", and contains profanity and other types of sensitive content.
- None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.
- Use with extreme caution, particularly in locations with less-than-free speech laws.
- You, and you alone are responsible for having downloaded the dataset and having a copy of the contents therein and I am completely indemnified from any and all liabilities.
### 2.1 Contamination
I accidentally included some of the benchmark data in the first version of the airoboros-2.1 model, which is why it had a crazy high truthfulqa score. Discussions here:
- https://huggingface.co/jondurbin/airoboros-l2-70b-2.1/discussions/3#64f325ce352152814d1f796a
- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/225#64f0997659da193a12b78c32
I flagged it for removal and recreated the model right away, but the leaderboard cached the old results so it took some time to reflect.
Some of the instructors I use create overlapping data, and it's hard to filter, especially since the instructions typically aren't verbatim copies of the benchmark questions.
This time around, I used `thenlper/gte-small` to calculate embeddings of the instructions, along with a faiss index, and removed anything from the dataset that had a similarity score < 0.15 (from truthfulqa). If you have a better way of checking, please let me know!
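A minimal sketch of what that filtering step could look like (the example strings and variable names are hypothetical; only the embedding model, the faiss index, and the 0.15 threshold come from the description above):

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical inputs; in practice these would be the full instruction set
# and the truthfulqa benchmark questions.
benchmark_questions = ["What happens if you crack your knuckles a lot?"]
instructions = [
    "Explain how photosynthesis works.",
    "What happens when you crack your knuckles?",
]

model = SentenceTransformer("thenlper/gte-small")
bench_emb = np.asarray(model.encode(benchmark_questions), dtype="float32")
inst_emb = np.asarray(model.encode(instructions), dtype="float32")

index = faiss.IndexFlatL2(bench_emb.shape[1])
index.add(bench_emb)

# Distance to the nearest benchmark question. Treating the "similarity score"
# as L2 distance (an assumption), rows scoring < 0.15 are dropped as
# potential contamination.
dists, _ = index.search(inst_emb, 1)
clean = [ins for ins, d in zip(instructions, dists[:, 0]) if d >= 0.15]
```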
I haven't done the same for most other benchmarks (yet) because there are hundreds of thousands of instructions and it would be pretty computationally expensive to do. That said, I only have ~1279 multiple choice questions, all randomly GPT generated, so there's probably little-to-no overlap.
### Awareness
I added a new "awareness" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.
For example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence.
If, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from "home" and will provide a more nuanced answer as a human would (in theory).
https://github.com/jondurbin/airoboros/commit/e91562c88d7610edb051606622e7c25a99884f7e
### Editor
I created a text edit instructor as well, which uses a reverse prompt mechanism, meaning it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then uses a prompt like "Please correct and improve the text." with the original well-written text and target output.
https://github.com/jondurbin/airoboros/commit/e60a68de5f9622320c9cfff3b238bd83cc7e373b
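In essence, each training pair couples a deliberately degraded copy of a sample with the original as the target; a minimal, hypothetical sketch of that pairing (function and field names are mine, not from the repo):

```python
def make_edit_pair(clean_text: str, corrupt) -> dict:
    """Build one training pair: a degraded copy as the input,
    the original well-written text as the target output."""
    noisy = corrupt(clean_text)  # e.g. an LLM call that injects misspellings and poor grammar
    return {
        "instruction": f"Please correct and improve the text.\n\n{noisy}",
        "response": clean_text,
    }
```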
### Writing
I regenerated (almost) all of the training data that included "Once upon a time..." because it's too clichéd and boring.
### Multiple choice
I created many more multiple choice questions, many of which have additional text context.
### Roleplay/conversation
I re-created all of the GTKM and RP datasets this time around, removing all of the "USER: " and "ASSISTANT: " prefixes from the instructions/responses, so it's more compatible with existing interfaces.
The GTKM instructor now does the same thing as RP, in that it saves each round of "conversation" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs.
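A rough, hypothetical sketch of that per-round splitting (the exact row format is an assumption based on the description above):

```python
def split_rounds(turns: list[tuple[str, str]]) -> list[dict]:
    """Emit one row per (user, assistant) round, carrying the earlier
    rounds along as plain conversational context, with no role prefixes."""
    rows, history = [], []
    for user_msg, assistant_msg in turns:
        rows.append({
            "instruction": "\n".join(history + [user_msg]),
            "response": assistant_msg,
        })
        history += [user_msg, assistant_msg]
    return rows
```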
### De-alignment
I included a small sampling of "de-alignment" data. The llama-2 base models seem extremely reluctant to discuss certain topics, curse, or otherwise produce other-than-PG content. I don't want a vile model, but I also don't *NOT* want a vile model.
- comedy skits, to add more comedy and occasional cursing
- instruction/response pairs that would typically otherwise be refused
- various (LLM-enhanced) stories from the internet with somewhat spicy content
- story-writing tasks as a re-telling of popular horror/fantasy films (by default, the LLM generated stories often have too sunny of a disposition, so hopefully this will allow for some twists or more interesting stories)
- rude responses (if a character card specifies they are rude and curse, it should actually do so without prompt hacking IMO)
None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web. Use with extreme caution, particularly in locations with strict speech laws!
See "instructions-clean.jsonl" for a version without dealignment data.
### UTF-8 to ASCII
I replaced most of the "standard" UTF-8 sequences - left double quote, right double quote, left apostrophe, ellipses - with standard ASCII characters. I don't know if this was contributing to part of the issue with EOS tokens being produced after apostrophes, but I figured it was worth trying.
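The replacement itself is just a small character map; a minimal sketch (the right single quote is my addition, the rest are the sequences named above):

```python
REPLACEMENTS = {
    "\u201c": '"',    # left double quotation mark
    "\u201d": '"',    # right double quotation mark
    "\u2018": "'",    # left single quotation mark (the "left apostrophe")
    "\u2019": "'",    # right single quotation mark (assumed alongside the left one)
    "\u2026": "...",  # horizontal ellipsis
}

def to_ascii(text: str) -> str:
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    return text
```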
### Summarization
I also included 500 examples from:
https://hf.co/datasets/mattpscott/airoboros-summarization
These are existing summarizations from various public datasets, formatted to airoboros-style contextual QA.
Thanks Matt!
### Usage/license info
Much (most) of the data was generated via gpt-4 API calls, which has a restriction in the ToS about "competing" models. Please seek legal advice if you plan to build or use a model that includes this dataset in a commercial setting.
|
v2ray/airoboros-2.2-dealignment
|
[
"license:other",
"region:us"
] |
2023-09-11T15:27:46+00:00
|
{"license": "other"}
|
2023-09-11T15:46:59+00:00
|
[] |
[] |
TAGS
#license-other #region-us
|
# Airoboros 2.2 Dealignment
This is a dealignment extraction of the airoboros-2.2 dataset, which can be found here.
ALL CREDITS TO @jondurbin FOR THIS AWESOME DATASET!
YOU MUST HAVE ACCESS TO THE ORIGINAL DATASET BEFORE REQUESTING ACCESS TO THIS DATASET! \
(But I can't check if you actually have it or not, so I set it to auto-approval.)
# Original URL
## Overview
This dataset is mostly a continuation of URL, with some notable additions and fixes.
__*I've gated access with a request, due to the de-alignment data. To download, you must agree to the following:*__
- Some of the content is "toxic"/"harmful", and contains profanity and other types of sensitive content.
- None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.
- Use with extreme caution, particularly in locations with less-than-free speech laws.
- You, and you alone are responsible for having downloaded the dataset and having a copy of the contents therein and I am completely indemnified from any and all liabilities.
### 2.1 Contamination
I accidentally included some of the benchmark data in the first version of the airoboros-2.1 model, which is why it had a crazy high truthfulqa score. Discussions here:
- URL
- URL
I flagged it for removal and recreated the model right away, but the leaderboard cached the old results so it took some time to reflect.
Some of the instructors I use create overlapping data, and it's hard to filter, especially since the instructions typically aren't verbatim copies of the benchmark questions.
This time around, I used 'thenlper/gte-small' to calculate embeddings of the instructions, along with a faiss index, and removed anything from the dataset that had a similarity score < 0.15 (from truthfulqa). If you have a better way of checking, please let me know!
I haven't done the same for most other benchmarks (yet) because there are hundreds of thousands of instructions and it would be pretty computationally expensive to do. That said, I only have ~1279 multiple choice questions, all randomly GPT generated, so there's probably little-to-no overlap.
### Awareness
I added a new "awareness" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.
For example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence.
If, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from "home" and will provide a more nuanced answer as a human would (in theory).
URL
### Editor
I created a text edit instructor as well, which uses a reverse prompt mechanism, meaning it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then uses a prompt like "Please correct and improve the text." with the original well-written text and target output.
URL
### Writing
I regenerated (almost) all of the training data that included "Once upon a time..." because it's too clichéd and boring.
### Multiple choice
I created many more multiple choice questions, many of which have additional text context.
### Roleplay/conversation
I re-created all of the GTKM and RP datasets this time around, removing all of the "USER: " and "ASSISTANT: " prefixes from the instructions/responses, so it's more compatible with existing interfaces.
The GTKM instructor now does the same thing as RP, in that it saves each round of "conversation" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs.
### De-alignment
I included a small sampling of "de-alignment" data. The llama-2 base models seem extremely reluctant to discuss certain topics, curse, or otherwise produce other-than-PG content. I don't want a vile model, but I also don't *NOT* want a vile model.
- comedy skits, to add more comedy and occasional cursing
- instruction/response pairs that would typically otherwise be refused
- various (LLM-enhanced) stories from the internet with somewhat spicy content
- story-writing tasks as a re-telling of popular horror/fantasy films (by default, the LLM generated stories often have too sunny of a disposition, so hopefully this will allow for some twists or more interesting stories)
- rude responses (if a character card specifies they are rude and curse, it should actually do so without prompt hacking IMO)
None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web. Use with extreme caution, particularly in locations with strict speech laws!
See "URL" for a version without dealignment data.
### UTF-8 to ASCII
I replaced most of the "standard" UTF-8 sequences - left double quote, right double quote, left apostrophe, ellipses - with standard ASCII characters. I don't know if this was contributing to part of the issue with EOS tokens being produced after apostrophes, but I figured it was worth trying.
### Summarization
I also included 500 examples from:
URL
These are existing summarizations from various public datasets, formatted to airoboros-style contextual QA.
Thanks Matt!
### Usage/license info
Much (most) of the data was generated via gpt-4 API calls, which has a restriction in the ToS about "competing" models. Please seek legal advice if you plan to build or use a model that includes this dataset in a commercial setting.
|
[
"# Airoboros 2.2 Dealignment\n\nThis is a dealignment extraction of the airoboros-2.2 dataset which can be found here.\n\nALL CREDITS TO @jondurbin FOR THIS AWESOME DATASET!\n\nYOU MUST HAVE ACCESS TO THE ORIGINAL DATASET BEFORE REQUESTING ACCESS TO THIS DATASET! \\\n(But I can't check if you actually have it or not so I set it to auto approval.)",
"# Original URL",
"## Overview\n\nThis dataset is mostly a continuation of URL with some notable additions and fixes.\n\n__*I've gated access with request, due to the de-alignment data. To download, you must agree to the following:*__\n- Some of the content is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content.\n- None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.\n- Use with extreme caution, particularly in locations with less-than-free speech laws.\n- You, and you alone are responsible for having downloaded the dataset and having a copy of the contents therein and I am completely indemnified from any and all liabilities.",
"### 2.1 Contamination\n\nI accidentally included some of the benchmark data in the first version of the airboros-2.1 model, which is why it had a crazy high truthfulqa score. Discussions here:\n- URL\n- URL\n\nI flagged it for removal and recreated the model right away, but the leaderboard cached the old results so it took some time to reflect.\n\nSome of the instructors I use create overlapping data, and it's hard to filter, especially since the instructions aren't typically verbatim with the benchmark questions.\n\nThis time around, I used 'thenlper/gte-small' to calculate embeddings of the instructions, along with a faiss index, and removed anything from the dataset that had a similarity score < 0.15 (from truthfulqa). If you have a better way of checking, please let me know!\n\nI haven't done the same for most other benchmarks (yet) because there are hundreds of thousands of instructions and it would be pretty computationally expensive to do. That said, I only have ~1279 multiple choice questions, all randomly GPT generated, so there's probably little-to-no overlap.",
"### Awareness\n\nI added a new \"awareness\" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.\n\nFor example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence.\nIf, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from \"home\" and will provide a more nuanced answer as a human would (in theory).\n\nURL",
"### Editor\n\nI created a text edit instructor as well, which uses a reverse prompt mechanism, meaning it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then uses a prompt like \"Please correct and improve the text.\" with the original well-written text and target output.\n\nURL",
"### Writing\n\nI regenerated (almost) all of the training data that included \"Once upon a time...\" because it's too cliche and boring.",
"### Multiple choice\n\nI created many more multiple choice questions, many of which have additional text context.",
"### Roleplay/conversation\n\nI re-created all of the GTKM and RP datasets this time around, removing all of the \"USER: \" and \"ASSISTANT: \" prefixes from the instructions/responses, so it's more compatible with existing interfaces.\n\nThe GTKM instructor now does the same thing as RP, in that it saves each round of \"conversation\" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs.",
"### De-alignment\n\nI included a small sampling of \"de-alignment\" data. The llama-2 base models seem extremely reluctant to discuss certain topics, curse, or otherwise produce other-than-pg content. I don't want a vile model, but I also don't *NOT* want a vile model.\n\n- comedy skits, to add more comedy and occasional cursing\n- instruction/response pairs that would typically otherwise be refused\n- various (LLM ehanced) stories from the internet with somewhat spicy content\n- story-writing tasks as a re-telling of popular horror/fantasy films (by default, the LLM generated stories often have too sunny of a disposition, so hopefully this will allow for some twists or more interesting stories)\n- rude responses (if a character card specifies they are rude and curse, it should actually do so without prompt hacking IMO)\n\nNone of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web. Use with extreme caution, particularly in locations with strict speech laws!\n\nSee \"URL\" for a version without dealignment data.",
"### UTF-8 to ASCII\n\nI replaced most of the \"standard\" utf-8 sequences - left double quote, right double quote, left apostraphe, ellipses - with standard ascii characters. I don't know if this was contributing to part of the issue with eos tokens being produced after apostraphes, but I figured it was worth trying.",
"### Summarization\n\nI also included 500 examples from:\nURL\n\nThese are existing summarizarions from various public datasets, formatted to airoboros style contextual qa.\n\nThanks Matt!",
"### Usage/license info\n\nMuch (most) of the data was generated via gpt-4 API calls, which has a restriction in the ToS about \"competing\" models. Please seek legal advice if you plan to build or use a model that includes this dataset in a commercial setting."
] |
[
"TAGS\n#license-other #region-us \n",
"# Airoboros 2.2 Dealignment\n\nThis is a dealignment extraction of the airoboros-2.2 dataset which can be found here.\n\nALL CREDITS TO @jondurbin FOR THIS AWESOME DATASET!\n\nYOU MUST HAVE ACCESS TO THE ORIGINAL DATASET BEFORE REQUESTING ACCESS TO THIS DATASET! \\\n(But I can't check if you actually have it or not so I set it to auto approval.)",
"# Original URL",
"## Overview\n\nThis dataset is mostly a continuation of URL with some notable additions and fixes.\n\n__*I've gated access with request, due to the de-alignment data. To download, you must agree to the following:*__\n- Some of the content is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content.\n- None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.\n- Use with extreme caution, particularly in locations with less-than-free speech laws.\n- You, and you alone are responsible for having downloaded the dataset and having a copy of the contents therein and I am completely indemnified from any and all liabilities.",
"### 2.1 Contamination\n\nI accidentally included some of the benchmark data in the first version of the airboros-2.1 model, which is why it had a crazy high truthfulqa score. Discussions here:\n- URL\n- URL\n\nI flagged it for removal and recreated the model right away, but the leaderboard cached the old results so it took some time to reflect.\n\nSome of the instructors I use create overlapping data, and it's hard to filter, especially since the instructions aren't typically verbatim with the benchmark questions.\n\nThis time around, I used 'thenlper/gte-small' to calculate embeddings of the instructions, along with a faiss index, and removed anything from the dataset that had a similarity score < 0.15 (from truthfulqa). If you have a better way of checking, please let me know!\n\nI haven't done the same for most other benchmarks (yet) because there are hundreds of thousands of instructions and it would be pretty computationally expensive to do. That said, I only have ~1279 multiple choice questions, all randomly GPT generated, so there's probably little-to-no overlap.",
"### Awareness\n\nI added a new \"awareness\" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.\n\nFor example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence.\nIf, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from \"home\" and will provide a more nuanced answer as a human would (in theory).\n\nURL",
"### Editor\n\nI created a text edit instructor as well, which uses a reverse prompt mechanism, meaning it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then uses a prompt like \"Please correct and improve the text.\" with the original well-written text and target output.\n\nURL",
"### Writing\n\nI regenerated (almost) all of the training data that included \"Once upon a time...\" because it's too cliche and boring.",
"### Multiple choice\n\nI created many more multiple choice questions, many of which have additional text context.",
"### Roleplay/conversation\n\nI re-created all of the GTKM and RP datasets this time around, removing all of the \"USER: \" and \"ASSISTANT: \" prefixes from the instructions/responses, so it's more compatible with existing interfaces.\n\nThe GTKM instructor now does the same thing as RP, in that it saves each round of \"conversation\" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs.",
"### De-alignment\n\nI included a small sampling of \"de-alignment\" data. The llama-2 base models seem extremely reluctant to discuss certain topics, curse, or otherwise produce other-than-pg content. I don't want a vile model, but I also don't *NOT* want a vile model.\n\n- comedy skits, to add more comedy and occasional cursing\n- instruction/response pairs that would typically otherwise be refused\n- various (LLM ehanced) stories from the internet with somewhat spicy content\n- story-writing tasks as a re-telling of popular horror/fantasy films (by default, the LLM generated stories often have too sunny of a disposition, so hopefully this will allow for some twists or more interesting stories)\n- rude responses (if a character card specifies they are rude and curse, it should actually do so without prompt hacking IMO)\n\nNone of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web. Use with extreme caution, particularly in locations with strict speech laws!\n\nSee \"URL\" for a version without dealignment data.",
"### UTF-8 to ASCII\n\nI replaced most of the \"standard\" utf-8 sequences - left double quote, right double quote, left apostraphe, ellipses - with standard ascii characters. I don't know if this was contributing to part of the issue with eos tokens being produced after apostraphes, but I figured it was worth trying.",
"### Summarization\n\nI also included 500 examples from:\nURL\n\nThese are existing summarizarions from various public datasets, formatted to airoboros style contextual qa.\n\nThanks Matt!",
"### Usage/license info\n\nMuch (most) of the data was generated via gpt-4 API calls, which has a restriction in the ToS about \"competing\" models. Please seek legal advice if you plan to build or use a model that includes this dataset in a commercial setting."
] |
[
11,
108,
3,
188,
256,
136,
79,
36,
21,
126,
277,
83,
42,
67
] |
[
"passage: TAGS\n#license-other #region-us \n# Airoboros 2.2 Dealignment\n\nThis is a dealignment extraction of the airoboros-2.2 dataset which can be found here.\n\nALL CREDITS TO @jondurbin FOR THIS AWESOME DATASET!\n\nYOU MUST HAVE ACCESS TO THE ORIGINAL DATASET BEFORE REQUESTING ACCESS TO THIS DATASET! \\\n(But I can't check if you actually have it or not so I set it to auto approval.)# Original URL## Overview\n\nThis dataset is mostly a continuation of URL with some notable additions and fixes.\n\n__*I've gated access with request, due to the de-alignment data. To download, you must agree to the following:*__\n- Some of the content is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content.\n- None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.\n- Use with extreme caution, particularly in locations with less-than-free speech laws.\n- You, and you alone are responsible for having downloaded the dataset and having a copy of the contents therein and I am completely indemnified from any and all liabilities.",
"passage: ### 2.1 Contamination\n\nI accidentally included some of the benchmark data in the first version of the airboros-2.1 model, which is why it had a crazy high truthfulqa score. Discussions here:\n- URL\n- URL\n\nI flagged it for removal and recreated the model right away, but the leaderboard cached the old results so it took some time to reflect.\n\nSome of the instructors I use create overlapping data, and it's hard to filter, especially since the instructions aren't typically verbatim with the benchmark questions.\n\nThis time around, I used 'thenlper/gte-small' to calculate embeddings of the instructions, along with a faiss index, and removed anything from the dataset that had a similarity score < 0.15 (from truthfulqa). If you have a better way of checking, please let me know!\n\nI haven't done the same for most other benchmarks (yet) because there are hundreds of thousands of instructions and it would be pretty computationally expensive to do. That said, I only have ~1279 multiple choice questions, all randomly GPT generated, so there's probably little-to-no overlap.### Awareness\n\nI added a new \"awareness\" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.\n\nFor example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence.\nIf, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from \"home\" and will provide a more nuanced answer as a human would (in theory).\n\nURL### Editor\n\nI created a text edit instructor as well, which uses a reverse prompt mechanism, meaning it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then uses a prompt like \"Please correct and improve the text.\" with the original well-written text and target output.\n\nURL### Writing\n\nI regenerated (almost) all of the training data that included \"Once upon a time...\" because it's too cliche and boring.### Multiple choice\n\nI created many more multiple choice questions, many of which have additional text context.### Roleplay/conversation\n\nI re-created all of the GTKM and RP datasets this time around, removing all of the \"USER: \" and \"ASSISTANT: \" prefixes from the instructions/responses, so it's more compatible with existing interfaces.\n\nThe GTKM instructor now does the same thing as RP, in that it saves each round of \"conversation\" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs."
] |
7b98bbd018845cc842641ca68c7ab64db64fe3e9
|
# Dataset of sagisawa_fumika/鷺沢文香/사기사와후미카 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of sagisawa_fumika/鷺沢文香/사기사와후미카 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are `blue_eyes, long_hair, black_hair, hairband, breasts, large_breasts, bangs, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 692.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagisawa_fumika_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 410.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagisawa_fumika_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1258 | 889.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagisawa_fumika_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 615.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagisawa_fumika_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1258 | 1.19 GiB | [Download](https://huggingface.co/datasets/CyberHarem/sagisawa_fumika_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sagisawa_fumika_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
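The IMG+TXT packages listed above can also be used without waifuc. A minimal sketch, assuming each image in `dataset-800.zip` ships with a same-named `.txt` tag file and that the images are PNGs (adjust the glob if not):
```python
import os
import zipfile
from glob import glob
from huggingface_hub import hf_hub_download

# download one of the IMG+TXT packages (800px edition here)
zip_file = hf_hub_download(
    repo_id='CyberHarem/sagisawa_fumika_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-named .txt tag file
for img_path in sorted(glob(os.path.join(dataset_dir, '*.png'))):
    txt_path = os.path.splitext(img_path)[0] + '.txt'
    with open(txt_path, 'r', encoding='utf-8') as f:
        tags = f.read().strip()
    print(img_path, tags)
```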
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, solo, looking_at_viewer, blush, floral_print, hair_flower, obi, smile, print_kimono, single_hair_bun, wide_sleeves, yukata, alternate_hairstyle, blue_kimono, long_sleeves, upper_body |
| 1 | 32 |  |  |  |  |  | 1girl, solo, hair_flower, looking_at_viewer, bare_shoulders, smile, blush, black_dress, black_gloves, blue_rose, cleavage, floral_print, necklace, strapless_dress, print_gloves, earrings, white_background, collarbone, simple_background, tiara |
| 2 | 7 |  |  |  |  |  | 1girl, holding_book, long_sleeves, looking_at_viewer, shawl, solo, blush, smile |
| 3 | 6 |  |  |  |  |  | 1girl, blush, holding_book, shawl, solo, sweater, looking_at_viewer, simple_background, white_background, long_sleeves, smile |
| 4 | 9 |  |  |  |  |  | 1girl, holding_book, looking_at_viewer, necklace, off-shoulder_sweater, shawl, solo, bare_shoulders, blush, pendant, collarbone, ribbed_sweater, long_sleeves, cleavage |
| 5 | 19 |  |  |  |  |  | 1girl, elbow_gloves, solo, looking_at_viewer, blush, open_mouth, smile, white_gloves, puffy_short_sleeves, microphone, blue_dress |
| 6 | 7 |  |  |  |  |  | 1girl, blush, cleavage, day, looking_at_viewer, outdoors, solo, cloud, blue_bikini, blue_sky, collarbone, navel, ocean, beach, smile, water, bare_shoulders, jacket, open_clothes, open_mouth, thighs |
| 7 | 7 |  |  |  |  |  | 1girl, bare_shoulders, blush, bridal_garter, cleavage, looking_at_viewer, navel, side_ponytail, solo, white_skirt, blue_sky, collarbone, day, outdoors, star_(symbol), bracelet, heart_necklace, miniskirt, arm_garter, bow, brown_hair, frilled_bikini, open_mouth, strapless, :d, hair_ribbon, sidelocks, thighs, white_bikini |
| 8 | 9 |  |  |  |  |  | 1girl, enmaided, looking_at_viewer, maid_headdress, solo, juliet_sleeves, bespectacled, maid_apron, single_hair_bun, blush, frills, smile, black_dress, holding, official_alternate_costume, brooch, official_alternate_hairstyle, round_eyewear, simple_background, white_background |
| 9 | 5 |  |  |  |  |  | 1girl, blue_bow, blush, bowtie, cat_ears, cat_hood, cat_tail, crescent_pin, frills, looking_at_viewer, solo, spider_web_print, brooch, cross-laced_clothes, long_sleeves, underbust, blue_dress, diadem, hands_up, paw_pose, star_(symbol), animal_ear_fluff, argyle, blue_ribbon, halloween_costume, hood_up, hooded_capelet, parted_lips, simple_background, smile, white_background |
| 10 | 5 |  |  |  |  |  | 1girl, fake_animal_ears, playboy_bunny, rabbit_ears, solo, wrist_cuffs, bare_shoulders, bowtie, cleavage, detached_collar, looking_at_viewer, black_leotard, blush, rabbit_tail, sitting, black_bow, black_footwear, covered_navel, fishnet_pantyhose, high_heels, smile, strapless_leotard |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | blush | floral_print | hair_flower | obi | smile | print_kimono | single_hair_bun | wide_sleeves | yukata | alternate_hairstyle | blue_kimono | long_sleeves | upper_body | bare_shoulders | black_dress | black_gloves | blue_rose | cleavage | necklace | strapless_dress | print_gloves | earrings | white_background | collarbone | simple_background | tiara | holding_book | shawl | sweater | off-shoulder_sweater | pendant | ribbed_sweater | elbow_gloves | open_mouth | white_gloves | puffy_short_sleeves | microphone | blue_dress | day | outdoors | cloud | blue_bikini | blue_sky | navel | ocean | beach | water | jacket | open_clothes | thighs | bridal_garter | side_ponytail | white_skirt | star_(symbol) | bracelet | heart_necklace | miniskirt | arm_garter | bow | brown_hair | frilled_bikini | strapless | :d | hair_ribbon | sidelocks | white_bikini | enmaided | maid_headdress | juliet_sleeves | bespectacled | maid_apron | frills | holding | official_alternate_costume | brooch | official_alternate_hairstyle | round_eyewear | blue_bow | bowtie | cat_ears | cat_hood | cat_tail | crescent_pin | spider_web_print | cross-laced_clothes | underbust | diadem | hands_up | paw_pose | animal_ear_fluff | argyle | blue_ribbon | halloween_costume | hood_up | hooded_capelet | parted_lips | fake_animal_ears | playboy_bunny | rabbit_ears | wrist_cuffs | detached_collar | black_leotard | rabbit_tail | sitting | black_bow | black_footwear | covered_navel | fishnet_pantyhose | high_heels | strapless_leotard |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-------|:--------------------|:--------|:---------------|:--------------|:------|:--------|:---------------|:------------------|:---------------|:---------|:----------------------|:--------------|:---------------|:-------------|:-----------------|:--------------|:---------------|:------------|:-----------|:-----------|:------------------|:---------------|:-----------|:-------------------|:-------------|:--------------------|:--------|:---------------|:--------|:----------|:-----------------------|:----------|:-----------------|:---------------|:-------------|:---------------|:----------------------|:-------------|:-------------|:------|:-----------|:--------|:--------------|:-----------|:--------|:--------|:--------|:--------|:---------|:---------------|:---------|:----------------|:----------------|:--------------|:----------------|:-----------|:-----------------|:------------|:-------------|:------|:-------------|:-----------------|:------------|:-----|:--------------|:------------|:---------------|:-----------|:-----------------|:-----------------|:---------------|:-------------|:---------|:----------|:-----------------------------|:---------|:-------------------------------|:----------------|:-----------|:---------|:-----------|:-----------|:-----------|:---------------|:-------------------|:----------------------|:------------|:---------|:-----------|:-----------|:-------------------|:---------|:--------------|:--------------------|:----------|:-----------------|:--------------|:-------------------|:----------------|:--------------|:--------------|:------------------|:----------------|:--------------|:----------|:------------|:-----------------|:----------------|:--------------------|:-------------|:--------------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 32 |  |  |  |  |  | X | X | X | X | X | X | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | | | | X | | | | | | | X | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | X | | | | X | | | | | | | X | | | | | | | | | | | X | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | X | | X | | | | X | X | | | | | X | | | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 19 |  |  |  |  |  | X | X | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | X | X | | | | X | | | | | | | | | X | | | | X | | | | | | X | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | | | X | | | | X | | | | | | X | | | | | | | | | | X | | | | | X | X | | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 9 |  |  |  |  |  | X | X | X | X | | | | X | | X | | | | | | | | X | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | X | X | X | | | | X | | | | | | | X | | | | | | | | | | | X | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | X | X | X | | | | X | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/sagisawa_fumika_idolmastercinderellagirls
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T15:30:27+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T09:38:25+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of sagisawa\_fumika/鷺沢文香/사기사와후미카 (THE iDOLM@STER: Cinderella Girls)
===========================================================================
This is the dataset of sagisawa\_fumika/鷺沢文香/사기사와후미카 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are 'blue\_eyes, long\_hair, black\_hair, hairband, breasts, large\_breasts, bangs, hair\_between\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ee3f32618cd8ae8a0bd9bab09f0f7138447b05e9
|
# Dataset Card for "egw_quick_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Gideonah/egw_quick_instruct
|
[
"region:us"
] |
2023-09-11T15:31:33+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10909, "num_examples": 9}], "download_size": 8828, "dataset_size": 10909}}
|
2023-09-11T15:31:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "egw_quick_instruct"
More Information needed
|
[
"# Dataset Card for \"egw_quick_instruct\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"egw_quick_instruct\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"egw_quick_instruct\"\n\nMore Information needed"
] |
d8f00bde8c23802162237c0c65cb5841624cb5e8
|
# Dataset Card for "amazoncat-13k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pietrolesci/amazoncat-13k
|
[
"region:us"
] |
2023-09-11T15:33:25+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "embedding_all-MiniLM-L12-v2", "data_files": [{"split": "train", "path": "embedding_all-MiniLM-L12-v2/train-*"}, {"split": "test", "path": "embedding_all-MiniLM-L12-v2/test-*"}]}, {"config_name": "embedding_all-mpnet-base-v2", "data_files": [{"split": "train", "path": "embedding_all-mpnet-base-v2/train-*"}, {"split": "test", "path": "embedding_all-mpnet-base-v2/test-*"}]}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "data_files": [{"split": "train", "path": "embedding_multi-qa-mpnet-base-dot-v1/train-*"}, {"split": "test", "path": "embedding_multi-qa-mpnet-base-dot-v1/test-*"}]}, {"config_name": "labels", "data_files": [{"split": "train", "path": "labels/train-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "uid_original", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "target_ind", "sequence": "int64"}, {"name": "target_rel", "sequence": "float64"}, {"name": "text", "dtype": "string"}, {"name": "uid", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3262662835, "num_examples": 1186239}, {"name": "test", "num_bytes": 842174854, "num_examples": 306782}], "download_size": 2560646204, "dataset_size": 4104837689}, {"config_name": "embedding_all-MiniLM-L12-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-MiniLM-L12-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1836297972, "num_examples": 1186239}, {"name": "test", "num_bytes": 474898536, "num_examples": 306782}], "download_size": 3228756828, "dataset_size": 2311196508}, {"config_name": "embedding_all-mpnet-base-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 3658361076, "num_examples": 1186239}, {"name": "test", "num_bytes": 946115688, "num_examples": 306782}], "download_size": 5524926640, "dataset_size": 4604476764}, {"config_name": "embedding_multi-qa-mpnet-base-dot-v1", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_multi-qa-mpnet-base-dot-v1", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 3658361076, "num_examples": 1186239}, {"name": "test", "num_bytes": 946115688, "num_examples": 306782}], "download_size": 5524904909, "dataset_size": 4604476764}, {"config_name": "labels", "features": [{"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 243277, "num_examples": 13331}], "download_size": 160461, "dataset_size": 243277}]}
|
2023-10-02T17:01:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "amazoncat-13k"
More Information needed
|
[
"# Dataset Card for \"amazoncat-13k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"amazoncat-13k\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"amazoncat-13k\"\n\nMore Information needed"
] |
bd91ee8ef04aa964051ad4f172f3ea7fd4ec00fc
|
# Dataset Card for "sciwritingdataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bvand086/sciwritingdataset
|
[
"region:us"
] |
2023-09-11T15:40:37+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 455418, "num_examples": 8787}], "download_size": 264827, "dataset_size": 455418}}
|
2023-09-11T15:40:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sciwritingdataset"
More Information needed
|
[
"# Dataset Card for \"sciwritingdataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sciwritingdataset\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sciwritingdataset\"\n\nMore Information needed"
] |
907dfb6302fa848aedd042c8078128351c6ea904
|
# Dataset Card for Evaluation run of ahxt/llama2_xs_460M_experimental
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ahxt/llama2_xs_460M_experimental
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [ahxt/llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental",
"harness_winogrande_5",
split="train")
```
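The configuration metadata for this card also exposes per-run timestamped splits plus a "latest" split; a sketch, assuming those split names, for loading a single run instead of the aggregated "train" split:
```python
from datasets import load_dataset

# load one specific run; split names come from the configuration metadata
run = load_dataset("open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental",
	"harness_gsm8k_5",
	split="latest")
print(run)
```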
## Latest results
These are the [latest results from run 2023-10-24T21:46:15.240855](https://huggingface.co/datasets/open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental/blob/main/results_2023-10-24T21-46-15.240855.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002936241610738255,
"em_stderr": 0.0005541113054709602,
"f1": 0.055131082214765176,
"f1_stderr": 0.0014074468297557536,
"acc": 0.2494080505130229,
"acc_stderr": 0.007026223145264506
},
"harness|drop|3": {
"em": 0.002936241610738255,
"em_stderr": 0.0005541113054709602,
"f1": 0.055131082214765176,
"f1_stderr": 0.0014074468297557536
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.4988161010260458,
"acc_stderr": 0.014052446290529012
}
}
```
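To work with the aggregated numbers programmatically, the linked results file can be fetched directly; a sketch, assuming the JSON's top level matches the excerpt above:
```python
import json
from huggingface_hub import hf_hub_download

# fetch the results file linked above and read the aggregated accuracy
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental",
    repo_type="dataset",
    filename="results_2023-10-24T21-46-15.240855.json",
)
with open(path) as f:
    results = json.load(f)
print(results["all"]["acc"])
```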
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental
|
[
"region:us"
] |
2023-09-11T15:45:17+00:00
|
{"pretty_name": "Evaluation run of ahxt/llama2_xs_460M_experimental", "dataset_summary": "Dataset automatically created during the evaluation run of model [ahxt/llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T21:46:15.240855](https://huggingface.co/datasets/open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental/blob/main/results_2023-10-24T21-46-15.240855.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002936241610738255,\n \"em_stderr\": 0.0005541113054709602,\n \"f1\": 0.055131082214765176,\n \"f1_stderr\": 0.0014074468297557536,\n \"acc\": 0.2494080505130229,\n \"acc_stderr\": 0.007026223145264506\n },\n \"harness|drop|3\": {\n \"em\": 0.002936241610738255,\n \"em_stderr\": 0.0005541113054709602,\n \"f1\": 0.055131082214765176,\n \"f1_stderr\": 0.0014074468297557536\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.4988161010260458,\n \"acc_stderr\": 0.014052446290529012\n }\n}\n```", "repo_url": "https://huggingface.co/ahxt/llama2_xs_460M_experimental", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|arc:challenge|25_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T21_46_15.240855", "path": ["**/details_harness|drop|3_2023-10-24T21-46-15.240855.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T21-46-15.240855.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T21_46_15.240855", "path": ["**/details_harness|gsm8k|5_2023-10-24T21-46-15.240855.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T21-46-15.240855.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hellaswag|10_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", 
"data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-45-07.137608.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-45-07.137608.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T16-45-07.137608.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T16-45-07.137608.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T16-45-07.137608.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T21_46_15.240855", "path": ["**/details_harness|winogrande|5_2023-10-24T21-46-15.240855.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T21-46-15.240855.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T16_45_07.137608", "path": ["results_2023-09-11T16-45-07.137608.parquet"]}, {"split": "2023_10_24T21_46_15.240855", "path": ["results_2023-10-24T21-46-15.240855.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T21-46-15.240855.parquet"]}]}]}
|
2023-10-24T20:46:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of ahxt/llama2_xs_460M_experimental
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model ahxt/llama2_xs_460M_experimental on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
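A minimal sketch of the loading pattern referenced above, assuming the details repository follows the leaderboard's usual `details_<org>__<model>` naming convention (the exact repo id is an assumption here):

```python
from datasets import load_dataset

# Assumed repository id, derived from the model name via the
# leaderboard's details_<org>__<model> naming convention.
data = load_dataset(
    "open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental",
    "harness_winogrande_5",  # one of the 64 task configurations
    split="train",
)
```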
## Latest results
These are the latest results from run 2023-10-24T21:46:15.240855 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of ahxt/llama2_xs_460M_experimental",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ahxt/llama2_xs_460M_experimental on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T21:46:15.240855(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of ahxt/llama2_xs_460M_experimental",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ahxt/llama2_xs_460M_experimental on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T21:46:15.240855(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ahxt/llama2_xs_460M_experimental## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ahxt/llama2_xs_460M_experimental on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T21:46:15.240855(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
15592a09897d7d9cb0162298239de37b8d259c4a
|
# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Mikivis/gpt2-large-lora-sft1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Mikivis/gpt2-large-lora-sft1](https://huggingface.co/Mikivis/gpt2-large-lora-sft1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft1",
"harness_winogrande_5",
split="train")
```
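The same pattern works for any of the 64 task configurations. For instance, the aggregated "results" configuration (the one used to compute the leaderboard metrics) exposes each timestamped run as a split, plus a "latest" alias:

```python
from datasets import load_dataset

# The "results" config stores the aggregated metrics of every run;
# the "latest" split always aliases the most recent timestamped run.
results = load_dataset(
    "open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft1",
    "results",
    split="latest",
)
```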
## Latest results
These are the [latest results from run 2023-10-27T12:39:31.040652](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft1/blob/main/results_2023-10-27T12-39-31.040652.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.010591442953020135,
"em_stderr": 0.0010483469790502319,
"f1": 0.08433934563758401,
"f1_stderr": 0.001890128394422962,
"acc": 0.27229676400947117,
"acc_stderr": 0.006998242518864891
},
"harness|drop|3": {
"em": 0.010591442953020135,
"em_stderr": 0.0010483469790502319,
"f1": 0.08433934563758401,
"f1_stderr": 0.001890128394422962
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5445935280189423,
"acc_stderr": 0.013996485037729782
}
}
```
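If you prefer working with the raw JSON linked above rather than the parquet splits, the per-run results file can also be fetched directly from the dataset repository; a minimal sketch using `huggingface_hub`:

```python
import json

from huggingface_hub import hf_hub_download

# Download the per-run results file referenced above from the dataset repo.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft1",
    repo_type="dataset",
    filename="results_2023-10-27T12-39-31.040652.json",
)
with open(path) as f:
    results = json.load(f)

# The metric block shown above lives in this file; its exact nesting may vary.
print(list(results))
```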
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft1
|
[
"region:us"
] |
2023-09-11T16:00:16+00:00
|
{"pretty_name": "Evaluation run of Mikivis/gpt2-large-lora-sft1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Mikivis/gpt2-large-lora-sft1](https://huggingface.co/Mikivis/gpt2-large-lora-sft1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-27T12:39:31.040652](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft1/blob/main/results_2023-10-27T12-39-31.040652.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.010591442953020135,\n \"em_stderr\": 0.0010483469790502319,\n \"f1\": 0.08433934563758401,\n \"f1_stderr\": 0.001890128394422962,\n \"acc\": 0.27229676400947117,\n \"acc_stderr\": 0.006998242518864891\n },\n \"harness|drop|3\": {\n \"em\": 0.010591442953020135,\n \"em_stderr\": 0.0010483469790502319,\n \"f1\": 0.08433934563758401,\n \"f1_stderr\": 0.001890128394422962\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5445935280189423,\n \"acc_stderr\": 0.013996485037729782\n }\n}\n```", "repo_url": "https://huggingface.co/Mikivis/gpt2-large-lora-sft1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_27T12_39_31.040652", "path": ["**/details_harness|drop|3_2023-10-27T12-39-31.040652.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-27T12-39-31.040652.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_27T12_39_31.040652", "path": ["**/details_harness|gsm8k|5_2023-10-27T12-39-31.040652.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-27T12-39-31.040652.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": 
"2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-00-06.378151.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-00-06.378151.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-00-06.378151.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-00-06.378151.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-00-06.378151.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_27T12_39_31.040652", "path": ["**/details_harness|winogrande|5_2023-10-27T12-39-31.040652.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-27T12-39-31.040652.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_00_06.378151", "path": ["results_2023-09-11T17-00-06.378151.parquet"]}, {"split": "2023_10_27T12_39_31.040652", "path": ["results_2023-10-27T12-39-31.040652.parquet"]}, {"split": "latest", "path": ["results_2023-10-27T12-39-31.040652.parquet"]}]}]}
|
2023-10-27T11:39:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-27T12:39:31.040652 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T12:39:31.040652(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T12:39:31.040652(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-27T12:39:31.040652(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3232a6fbdda82d95100b18a8883ef7072fcccdeb
|
# Dataset Card for "sheldon_dialogues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
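Pending a fuller card, here is a minimal loading sketch grounded in the config metadata below (a single `train` split whose examples carry one `text` field):
```python
from datasets import load_dataset

# Split and feature names come from the dataset_info metadata below.
ds = load_dataset("fenilgandhi/sheldon_dialogues", split="train")
print(ds.num_rows)    # 11217 examples per the metadata
print(ds[0]["text"])  # each example is a single dialogue string
```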
|
fenilgandhi/sheldon_dialogues
|
[
"region:us"
] |
2023-09-11T16:01:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8311715, "num_examples": 11217}], "download_size": 1623617, "dataset_size": 8311715}}
|
2023-09-11T17:08:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sheldon_dialogues"
More Information needed
|
[
"# Dataset Card for \"sheldon_dialogues\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sheldon_dialogues\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sheldon_dialogues\"\n\nMore Information needed"
] |
3f72565f2a349d5cf3d79fcd461078db37d11df5
|
# Dataset of erika (Pokémon)
This is the dataset of erika (Pokémon), containing 500 images and their tags.
The core tags of this character are `short_hair, black_hair, hairband, breasts, bangs, red_hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 524.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/erika_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 323.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/erika_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1102 | 628.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/erika_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 469.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/erika_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1102 | 845.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/erika_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
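The non-raw packages in the table above are plain IMG+TXT zips; a minimal sketch for fetching and unpacking one of them (the 800px package here), mirroring the raw-loading snippet in the next section:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# fetch the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/erika_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
# unpack images and their .txt tag files side by side
target_dir = 'dataset_800'
os.makedirs(target_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(target_dir)
```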
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/erika_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, official_alternate_costume, smile, grey_eyes, closed_mouth, eyelashes, solo, looking_at_viewer, green_dress, green_hairband, hair_flower, short_sleeves, yellow_hairband, blush, long_sleeves, white_flower |
| 1 | 12 |  |  |  |  |  | 1girl, kimono, smile, solo, blush, brown_eyes, open_mouth, looking_at_viewer |
| 2 | 11 |  |  |  |  |  | 1girl, holding_poke_ball, poke_ball_(basic), smile, solo, brown_eyes, yellow_kimono, blush, looking_at_viewer, wide_sleeves |
| 3 | 7 |  |  |  |  |  | 1girl, closed_mouth, green_kimono, smile, holding_poke_ball, looking_at_viewer, poke_ball_(basic), solo, wide_sleeves, blush, eyelashes, grey_eyes, long_sleeves, hands_up, hakama |
| 4 | 14 |  |  |  |  |  | 1girl, hakama_skirt, solo, long_sleeves, simple_background, white_background, wide_sleeves, looking_at_viewer, closed_mouth, red_hakama, standing, black_eyes, smile, yellow_kimono, full_body, blush, sandals, tabi, white_socks, grey_eyes, eyelashes, headband |
| 5 | 5 |  |  |  |  |  | 1girl, blush, nipples, solo, black_eyes, large_breasts, open_clothes, naked_kimono, navel, nude, pussy, uncensored, yellow_kimono |
| 6 | 9 |  |  |  |  |  | 1boy, 1girl, hetero, nipples, blush, sex, solo_focus, vaginal, large_breasts, pussy, female_pubic_hair, smile, collarbone, girl_on_top, kimono, no_bra, nude, cowgirl_position, mosaic_censoring, no_panties, penis, spread_legs, sweat |
| 7 | 5 |  |  |  |  |  | 1girl, blush, hetero, nipples, sex, solo_focus, 1boy, bar_censor, cum_in_pussy, kimono, large_breasts, open_mouth, penis, vaginal, on_back, spread_legs, bound, brown_eyes, hakama, missionary, poke_ball, socks, tears |
| 8 | 13 |  |  |  |  |  | 1girl, 1boy, hetero, blush, large_breasts, nude, open_mouth, tongue_out, licking_penis, nipples, solo_focus, kimono, mosaic_censoring, black_eyes, cum_in_mouth, facial, grey_eyes, saliva, simple_background |
| 9 | 6 |  |  |  |  |  | 1girl, brown_eyes, hetero, medium_breasts, mosaic_censoring, solo_focus, cum_on_body, dark-skinned_male, headband, multiple_boys, multiple_penises, cum_string, empty_eyes, fat_man, navel, nipples, nude, brown_hair, cum_in_mouth, gangbang, interracial, mind_control |
| 10 | 9 |  |  |  |  |  | navel, smile, 1girl, blush, collarbone, cleavage, large_breasts, green_bikini, looking_at_viewer, outdoors, sarong, sitting, solo, bare_shoulders, barefoot, beach, day, sky |
| 11 | 5 |  |  |  |  |  | 1girl, armpit_hair, blush, female_pubic_hair, huge_breasts, nipples, solo, sweat, armpits, erection, excessive_pubic_hair, huge_penis, large_areolae, large_penis, nude, futanari, headband, navel, open_mouth, smile, steaming_body, abs, arm_behind_head, bar_censor, bob_cut, foreskin, gigantic_breasts, gloves, green_background, large_testicles, looking_at_viewer, mosaic_censoring, muscular_female, precum, shiny_skin, simple_background, thighs, veiny_penis |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | official_alternate_costume | smile | grey_eyes | closed_mouth | eyelashes | solo | looking_at_viewer | green_dress | green_hairband | hair_flower | short_sleeves | yellow_hairband | blush | long_sleeves | white_flower | kimono | brown_eyes | open_mouth | holding_poke_ball | poke_ball_(basic) | yellow_kimono | wide_sleeves | green_kimono | hands_up | hakama | hakama_skirt | simple_background | white_background | red_hakama | standing | black_eyes | full_body | sandals | tabi | white_socks | headband | nipples | large_breasts | open_clothes | naked_kimono | navel | nude | pussy | uncensored | 1boy | hetero | sex | solo_focus | vaginal | female_pubic_hair | collarbone | girl_on_top | no_bra | cowgirl_position | mosaic_censoring | no_panties | penis | spread_legs | sweat | bar_censor | cum_in_pussy | on_back | bound | missionary | poke_ball | socks | tears | tongue_out | licking_penis | cum_in_mouth | facial | saliva | medium_breasts | cum_on_body | dark-skinned_male | multiple_boys | multiple_penises | cum_string | empty_eyes | fat_man | brown_hair | gangbang | interracial | mind_control | cleavage | green_bikini | outdoors | sarong | sitting | bare_shoulders | barefoot | beach | day | sky | armpit_hair | huge_breasts | armpits | erection | excessive_pubic_hair | huge_penis | large_areolae | large_penis | futanari | steaming_body | abs | arm_behind_head | bob_cut | foreskin | gigantic_breasts | gloves | green_background | large_testicles | muscular_female | precum | shiny_skin | thighs | veiny_penis |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------------------|:--------|:------------|:---------------|:------------|:-------|:--------------------|:--------------|:-----------------|:--------------|:----------------|:------------------|:--------|:---------------|:---------------|:---------|:-------------|:-------------|:--------------------|:--------------------|:----------------|:---------------|:---------------|:-----------|:---------|:---------------|:--------------------|:-------------------|:-------------|:-----------|:-------------|:------------|:----------|:-------|:--------------|:-----------|:----------|:----------------|:---------------|:---------------|:--------|:-------|:--------|:-------------|:-------|:---------|:------|:-------------|:----------|:--------------------|:-------------|:--------------|:---------|:-------------------|:-------------------|:-------------|:--------|:--------------|:--------|:-------------|:---------------|:----------|:--------|:-------------|:------------|:--------|:--------|:-------------|:----------------|:---------------|:---------|:---------|:-----------------|:--------------|:--------------------|:----------------|:-------------------|:-------------|:-------------|:----------|:-------------|:-----------|:--------------|:---------------|:-----------|:---------------|:-----------|:---------|:----------|:-----------------|:-----------|:--------|:------|:------|:--------------|:---------------|:----------|:-----------|:-----------------------|:-------------|:----------------|:--------------|:-----------|:----------------|:------|:------------------|:----------|:-----------|:-------------------|:---------|:-------------------|:------------------|:------------------|:---------|:-------------|:---------|:--------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | | X | | | | X | X | | | | | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | | X | | | | X | X | | | | | | X | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | X | X | X | X | X | X | | | | | | X | X | | | | | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | | X | X | X | X | X | X | | | | | | X | X | | | | | | | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | | X | | | | | | | X | | | | | | | | X | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | | X | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | X | X | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | | | | | | | | | | | X | | | X | X | X | | | | | | | X | | | | | | | | | | | | X | X | | | | | | | X | X | X | X | X | | | | | | | | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 13 |  |  |  |  |  | X | | | X | | | | | | | | | | X | | | X | | X | | | | | | | | | X | | | | X | | | | | | X | X | | | | X | | | X | X | | X | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 6 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | | | | X | X | | | | X | | X | | | | | | | X | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 9 |  |  |  |  |  | X | | X | | | | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 5 |  |  |  |  |  | X | | X | | | | X | X | | | | | | X | | | | | X | | | | | | | | | X | | | | | | | | | X | X | | | | X | X | | | | | | | | X | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/erika_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T16:01:55+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T20:11:13+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of erika (Pokémon)
==========================
This is the dataset of erika (Pokémon), containing 500 images and their tags.
The core tags of this character are 'short\_hair, black\_hair, hairband, breasts, bangs, red\_hairband', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
2fc4c844376f02c3d8298622adf83deae9cec2b7
|
# Dataset of jeanne_d_arc_alter/ジャンヌ・ダルク〔オルタ〕/贞德〔Alter〕 (Fate/Grand Order)
This is the dataset of jeanne_d_arc_alter/ジャンヌ・ダルク〔オルタ〕/贞德〔Alter〕 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `yellow_eyes, breasts, bangs, ahoge, large_breasts, white_hair, long_hair, very_long_hair, short_hair, hair_between_eyes, grey_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 848.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jeanne_d_arc_alter_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 472.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jeanne_d_arc_alter_fgo/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1304 | 1007.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jeanne_d_arc_alter_fgo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 751.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jeanne_d_arc_alter_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1304 | 1.39 GiB | [Download](https://huggingface.co/datasets/CyberHarem/jeanne_d_arc_alter_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/jeanne_d_arc_alter_fgo',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
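As a small follow-on, the clusters below can be reproduced roughly by tallying tag frequencies over the extracted items; this sketch assumes `item.meta['tags']` is either a list of tag names or a tag-to-score mapping, as yielded by the loop above:
```python
from collections import Counter

from waifuc.source import LocalSource

tag_counts = Counter()
for item in LocalSource('dataset_dir'):
    tags = item.meta['tags']
    # handle both a plain list of tags and a tag -> score mapping
    tag_counts.update(tags if isinstance(tags, (list, tuple)) else list(tags))
print(tag_counts.most_common(20))
```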
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, armored_dress, black_dress, black_thighhighs, flag, gauntlets, headpiece, solo, chain, looking_at_viewer, fur-trimmed_cape, fur_collar, holding_sword, black_cape, grin |
| 1 | 8 |  |  |  |  |  | 1girl, armored_dress, black_dress, black_thighhighs, gauntlets, headpiece, looking_at_viewer, solo, blonde_hair, fur_trim, parted_lips, holding_sword, flag, standing, black_cape, smile |
| 2 | 31 |  |  |  |  |  | 1girl, bare_shoulders, fur_trim, headpiece, solo, armored_dress, looking_at_viewer, black_gloves, chain, gauntlets, smile, thighhighs, cleavage, holding_sword, navel_cutout, elbow_gloves, black_dress, flag, armored_boots, blonde_hair, parted_lips, medium_breasts |
| 3 | 6 |  |  |  |  |  | 1girl, bare_shoulders, black_gloves, cleavage, elbow_gloves, hair_flower, looking_at_viewer, official_alternate_costume, purple_dress, solo, black_thighhighs, blush, choker, red_ribbon, strapless_dress, collarbone, neck_ribbon, purple_flower, sitting, smile |
| 4 | 19 |  |  |  |  |  | 1girl, black_dress, official_alternate_costume, solo, looking_at_viewer, jacket, short_dress, long_sleeves, smile, holding, sword, blue_coat, fur-trimmed_coat, necklace, open_coat, black_footwear, knee_boots |
| 5 | 11 |  |  |  |  |  | 1girl, black_dress, collarbone, long_sleeves, official_alternate_costume, solo, cleavage, necklace, short_dress, blush, fur-trimmed_coat, jacket, looking_at_viewer, blue_coat, fur-trimmed_sleeves, open_coat, thighs, black_footwear, boots, closed_mouth, cowboy_shot, white_background, zipper |
| 6 | 6 |  |  |  |  |  | 1girl, black_bra, cleavage, collarbone, looking_at_viewer, solo, blush, black_panties, closed_mouth, navel |
| 7 | 9 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, lingerie, looking_at_viewer, solo, black_panties, collarbone, navel, garter_belt, thighs, black_gloves, black_thighhighs, blush, choker, babydoll, jewelry, simple_background, smile, bra, closed_mouth, cosplay, see-through, underwear_only, white_background |
| 8 | 5 |  |  |  |  |  | 1girl, bare_shoulders, black_bikini, blush, cleavage, looking_at_viewer, outdoors, solo, thighs, beach, choker, collarbone, navel, o-ring_bikini, ocean, armpits, arms_behind_head, arms_up, blue_sky, braid, day, kneeling, necklace, official_alternate_costume, barefoot, closed_mouth, sand, smile |
| 9 | 8 |  |  |  |  |  | 1girl, black_bikini, black_gloves, black_jacket, cleavage, katana, looking_at_viewer, o-ring_bikini, shrug_(clothing), solo, cropped_jacket, holding_sword, red_thighhighs, single_thighhigh, thigh_strap, navel, o-ring_bottom, long_sleeves, unsheathed, black_choker, high_heels, o-ring_top, smile, thighs |
| 10 | 7 |  |  |  |  |  | 1girl, bare_shoulders, black_leotard, cleavage, looking_at_viewer, playboy_bunny, solo, blush, fake_animal_ears, rabbit_ears, detached_collar, strapless_leotard, covered_navel, fishnet_pantyhose, highleg_leotard, simple_background, thighs, white_background, wrist_cuffs |
| 11 | 8 |  |  |  |  |  | 1girl, hair_flower, looking_at_viewer, solo, black_kimono, holding, obi, upper_body, fur_collar, fur_trim, wide_sleeves, blush, closed_mouth, floral_print, headpiece, oil-paper_umbrella |
| 12 | 5 |  |  |  |  |  | 1girl, blush, collared_shirt, long_sleeves, school_uniform, solo, white_shirt, black_skirt, looking_at_viewer, pleated_skirt, thighs, black_thighhighs, open_jacket, simple_background, smile, white_background, alternate_costume, black_jacket, blazer, closed_mouth, collarbone, dress_shirt, red_bowtie, sitting |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | armored_dress | black_dress | black_thighhighs | flag | gauntlets | headpiece | solo | chain | looking_at_viewer | fur-trimmed_cape | fur_collar | holding_sword | black_cape | grin | blonde_hair | fur_trim | parted_lips | standing | smile | bare_shoulders | black_gloves | thighhighs | cleavage | navel_cutout | elbow_gloves | armored_boots | medium_breasts | hair_flower | official_alternate_costume | purple_dress | blush | choker | red_ribbon | strapless_dress | collarbone | neck_ribbon | purple_flower | sitting | jacket | short_dress | long_sleeves | holding | sword | blue_coat | fur-trimmed_coat | necklace | open_coat | black_footwear | knee_boots | fur-trimmed_sleeves | thighs | boots | closed_mouth | cowboy_shot | white_background | zipper | black_bra | black_panties | navel | lingerie | garter_belt | babydoll | jewelry | simple_background | bra | cosplay | see-through | underwear_only | black_bikini | outdoors | beach | o-ring_bikini | ocean | armpits | arms_behind_head | arms_up | blue_sky | braid | day | kneeling | barefoot | sand | black_jacket | katana | shrug_(clothing) | cropped_jacket | red_thighhighs | single_thighhigh | thigh_strap | o-ring_bottom | unsheathed | black_choker | high_heels | o-ring_top | black_leotard | playboy_bunny | fake_animal_ears | rabbit_ears | detached_collar | strapless_leotard | covered_navel | fishnet_pantyhose | highleg_leotard | wrist_cuffs | black_kimono | obi | upper_body | wide_sleeves | floral_print | oil-paper_umbrella | collared_shirt | school_uniform | white_shirt | black_skirt | pleated_skirt | open_jacket | alternate_costume | blazer | dress_shirt | red_bowtie |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:----------------|:--------------|:-------------------|:-------|:------------|:------------|:-------|:--------|:--------------------|:-------------------|:-------------|:----------------|:-------------|:-------|:--------------|:-----------|:--------------|:-----------|:--------|:-----------------|:---------------|:-------------|:-----------|:---------------|:---------------|:----------------|:-----------------|:--------------|:-----------------------------|:---------------|:--------|:---------|:-------------|:------------------|:-------------|:--------------|:----------------|:----------|:---------|:--------------|:---------------|:----------|:--------|:------------|:-------------------|:-----------|:------------|:-----------------|:-------------|:----------------------|:---------|:--------|:---------------|:--------------|:-------------------|:---------|:------------|:----------------|:--------|:-----------|:--------------|:-----------|:----------|:--------------------|:------|:----------|:--------------|:-----------------|:---------------|:-----------|:--------|:----------------|:--------|:----------|:-------------------|:----------|:-----------|:--------|:------|:-----------|:-----------|:-------|:---------------|:---------|:-------------------|:-----------------|:-----------------|:-------------------|:--------------|:----------------|:-------------|:---------------|:-------------|:-------------|:----------------|:----------------|:-------------------|:--------------|:------------------|:--------------------|:----------------|:--------------------|:------------------|:--------------|:---------------|:------|:-------------|:---------------|:---------------|:---------------------|:-----------------|:-----------------|:--------------|:--------------|:----------------|:--------------|:--------------------|:---------|:--------------|:-------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 31 |  |  |  |  |  | X | X | X | | X | X | X | X | X | X | | | X | | | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | X | | | | X | | X | | | | | | | | | | X | X | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 19 |  |  |  |  |  | X | | X | | | | | X | | X | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | | X | | | | | X | | X | | | | | | | | | | | | | | X | | | | | | X | | X | | | | X | | | | X | X | X | | | X | X | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | | | X | | X | | | | | | | | | | | | | | X | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | X | | | | X | | X | | | | | | | | | | X | X | X | | X | | | | | | | | X | X | | | X | | | | | | | | | | | | | | | | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | | | | | X | | X | | | | | | | | | | X | X | | | X | | | | | | X | | X | X | | | X | | | | | | | | | | | X | | | | | X | | X | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 8 |  |  |  |  |  | X | | | | | | | X | | X | | | X | | | | | | | X | | X | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | X | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 7 |  |  |  |  |  | X | | | | | | | X | | X | | | | | | | | | | | X | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 11 | 8 |  |  |  |  |  | X | | | | | | X | X | | X | | X | | | | | X | | | | | | | | | | | | X | | | X | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | |
| 12 | 5 |  |  |  |  |  | X | | | X | | | | X | | X | | | | | | | | | | X | | | | | | | | | | | | X | | | | X | | | X | | | X | | | | | | | | | | X | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/jeanne_d_arc_alter_fgo
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T16:07:41+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-11T14:56:13+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of jeanne\_d\_arc\_alter/ジャンヌ・ダルク〔オルタ〕/贞德〔Alter〕 (Fate/Grand Order)
===========================================================================
This is the dataset of jeanne\_d\_arc\_alter/ジャンヌ・ダルク〔オルタ〕/贞德〔Alter〕 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are 'yellow\_eyes, breasts, bangs, ahoge, large\_breasts, white\_hair, long\_hair, very\_long\_hair, short\_hair, hair\_between\_eyes, grey\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
1d150ffd298482582e46a7a1b91d56a83262d1e5
|
# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_converted_200
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/_fsx_shared-falcon-180B_converted_200
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [_fsx_shared-falcon-180B_converted_200](https://huggingface.co/_fsx_shared-falcon-180B_converted_200) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details__fsx_shared-falcon-180B_converted_200",
                    "harness_truthfulqa_mc_0",
                    split="train")
```
## Latest results
These are the [latest results from run 2023-09-11T17:08:24.221910](https://huggingface.co/datasets/open-llm-leaderboard/details__fsx_shared-falcon-180B_converted_200/blob/main/results_2023-09-11T17-08-24.221910.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and the "latest" split for each eval):
```python
{
    "all": {
        "mc1": 0.43451652386780903,
        "mc1_stderr": 0.017352738749259564,
        "mc2": 0.6219568610455939,
        "mc2_stderr": 0.01539764866197052
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.43451652386780903,
        "mc1_stderr": 0.017352738749259564,
        "mc2": 0.6219568610455939,
        "mc2_stderr": 0.01539764866197052
    }
}
```
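Per the repo metadata, the aggregated numbers also live in a `results` config whose splits are run timestamps, with `latest` aliasing the newest run; a hedged sketch for loading it:
```python
from datasets import load_dataset

# "results" config and "latest" split names are taken from the repo metadata.
results = load_dataset("open-llm-leaderboard/details__fsx_shared-falcon-180B_converted_200",
                       "results",
                       split="latest")
```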
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details__fsx_shared-falcon-180B_converted_200
|
[
"region:us"
] |
2023-09-11T16:08:25+00:00
|
{"pretty_name": "Evaluation run of _fsx_shared-falcon-180B_converted_200", "dataset_summary": "Dataset automatically created during the evaluation run of model [_fsx_shared-falcon-180B_converted_200](https://huggingface.co/_fsx_shared-falcon-180B_converted_200) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details__fsx_shared-falcon-180B_converted_200\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-11T17:08:24.221910](https://huggingface.co/datasets/open-llm-leaderboard/details__fsx_shared-falcon-180B_converted_200/blob/main/results_2023-09-11T17-08-24.221910.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"mc1\": 0.43451652386780903,\n \"mc1_stderr\": 0.017352738749259564,\n \"mc2\": 0.6219568610455939,\n \"mc2_stderr\": 0.01539764866197052\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.43451652386780903,\n \"mc1_stderr\": 0.017352738749259564,\n \"mc2\": 0.6219568610455939,\n \"mc2_stderr\": 0.01539764866197052\n }\n}\n```", "repo_url": "https://huggingface.co/_fsx_shared-falcon-180B_converted_200", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_08_24.221910", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-08-24.221910.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-08-24.221910.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_08_24.221910", "path": ["results_2023-09-11T17-08-24.221910.parquet"]}, {"split": "latest", "path": ["results_2023-09-11T17-08-24.221910.parquet"]}]}]}
|
2023-09-11T16:08:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_converted_200
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_converted_200 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-11T17:08:24.221910 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_converted_200",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_converted_200 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-11T17:08:24.221910(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_converted_200",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_converted_200 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-11T17:08:24.221910(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of _fsx_shared-falcon-180B_converted_200## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model _fsx_shared-falcon-180B_converted_200 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-11T17:08:24.221910(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
590c6b39a090773b9b89769fa9dc24ebcb159762
|
# Dataset Card for Evaluation run of TFLai/Nova-13B-50-step
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TFLai/Nova-13B-50-step
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TFLai/Nova-13B-50-step](https://huggingface.co/TFLai/Nova-13B-50-step) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TFLai__Nova-13B-50-step_public",
                    "harness_winogrande_5",
                    split="train")
```
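Since each run is stored under its timestamp, an earlier run can be pulled by naming its split explicitly; the split name below is copied from the `harness_gsm8k_5` config in the repo metadata:
```python
from datasets import load_dataset

# timestamped split name copied from the harness_gsm8k_5 config metadata
earlier = load_dataset("open-llm-leaderboard/details_TFLai__Nova-13B-50-step_public",
                       "harness_gsm8k_5",
                       split="2023_11_05T04_37_37.202596")
```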
## Latest results
These are the [latest results from run 2023-11-07T00:47:35.220505](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Nova-13B-50-step_public/blob/main/results_2023-11-07T00-47-35.220505.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and the "latest" split for each eval):
```python
{
    "all": {
        "em": 0.01143036912751678,
        "em_stderr": 0.0010886127371891202,
        "f1": 0.08822462248322198,
        "f1_stderr": 0.001908858914337863,
        "acc": 0.40478035487648495,
        "acc_stderr": 0.00877689827768222
    },
    "harness|drop|3": {
        "em": 0.01143036912751678,
        "em_stderr": 0.0010886127371891202,
        "f1": 0.08822462248322198,
        "f1_stderr": 0.001908858914337863
    },
    "harness|gsm8k|5": {
        "acc": 0.04397270659590599,
        "acc_stderr": 0.005647666449126457
    },
    "harness|winogrande|5": {
        "acc": 0.7655880031570639,
        "acc_stderr": 0.011906130106237983
    }
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TFLai__Nova-13B-50-step
|
[
"region:us"
] |
2023-09-11T16:10:25+00:00
|
{"pretty_name": "Evaluation run of TFLai/Nova-13B-50-step", "dataset_summary": "Dataset automatically created during the evaluation run of model [TFLai/Nova-13B-50-step](https://huggingface.co/TFLai/Nova-13B-50-step) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TFLai__Nova-13B-50-step_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T00:47:35.220505](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Nova-13B-50-step_public/blob/main/results_2023-11-07T00-47-35.220505.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.01143036912751678,\n \"em_stderr\": 0.0010886127371891202,\n \"f1\": 0.08822462248322198,\n \"f1_stderr\": 0.001908858914337863,\n \"acc\": 0.40478035487648495,\n \"acc_stderr\": 0.00877689827768222\n },\n \"harness|drop|3\": {\n \"em\": 0.01143036912751678,\n \"em_stderr\": 0.0010886127371891202,\n \"f1\": 0.08822462248322198,\n \"f1_stderr\": 0.001908858914337863\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04397270659590599,\n \"acc_stderr\": 0.005647666449126457\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7655880031570639,\n \"acc_stderr\": 0.011906130106237983\n }\n}\n```", "repo_url": "https://huggingface.co/TFLai/Nova-13B-50-step", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T04_37_37.202596", "path": ["**/details_harness|drop|3_2023-11-05T04-37-37.202596.parquet"]}, {"split": "2023_11_07T00_47_35.220505", "path": ["**/details_harness|drop|3_2023-11-07T00-47-35.220505.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T00-47-35.220505.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T04_37_37.202596", "path": ["**/details_harness|gsm8k|5_2023-11-05T04-37-37.202596.parquet"]}, {"split": "2023_11_07T00_47_35.220505", "path": ["**/details_harness|gsm8k|5_2023-11-07T00-47-35.220505.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T00-47-35.220505.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T04_37_37.202596", "path": ["**/details_harness|winogrande|5_2023-11-05T04-37-37.202596.parquet"]}, {"split": "2023_11_07T00_47_35.220505", "path": ["**/details_harness|winogrande|5_2023-11-07T00-47-35.220505.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-07T00-47-35.220505.parquet"]}]}, 
{"config_name": "results", "data_files": [{"split": "2023_11_05T04_37_37.202596", "path": ["results_2023-11-05T04-37-37.202596.parquet"]}, {"split": "2023_11_07T00_47_35.220505", "path": ["results_2023-11-07T00-47-35.220505.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T00-47-35.220505.parquet"]}]}]}
|
2023-12-01T14:18:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TFLai/Nova-13B-50-step
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TFLai/Nova-13B-50-step on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-07T00:47:35.220505 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TFLai/Nova-13B-50-step",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TFLai/Nova-13B-50-step on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T00:47:35.220505(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TFLai/Nova-13B-50-step",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TFLai/Nova-13B-50-step on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T00:47:35.220505(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TFLai/Nova-13B-50-step## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TFLai/Nova-13B-50-step on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T00:47:35.220505(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
bea3e10bd76ab4f196915df6e27d1bebd3bab8a6
|
# Dataset Card for "wuerstchen"
The dataset was generated using the code below:
```py
import torch
from datasets import Dataset, Features
from datasets import Image as ImageFeature
from datasets import Value, load_dataset
from diffusers import AutoPipelineForText2Image
import PIL.Image
def main():
print("Loading dataset...")
parti_prompts = load_dataset("nateraw/parti-prompts", split="train")
print("Loading pipeline...")
seed = 0
device = "cuda"
generator = torch.Generator(device).manual_seed(seed)
dtype = torch.float16
ckpt_id = "warp-diffusion/wuerstchen"
pipeline = AutoPipelineForText2Image.from_pretrained(
ckpt_id, torch_dtype=dtype
).to(device)
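    # torch.compile (PyTorch 2.x) trades a one-time warm-up for faster repeated inference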
pipeline.prior_prior = torch.compile(pipeline.prior_prior, mode="reduce-overhead", fullgraph=True)
pipeline.decoder = torch.compile(pipeline.decoder, mode="reduce-overhead", fullgraph=True)
print("Running inference...")
main_dict = {}
for i in range(len(parti_prompts)):
sample = parti_prompts[i]
prompt = sample["Prompt"]
image = pipeline(
prompt=prompt,
height=1024,
width=1024,
prior_guidance_scale=4.0,
decoder_guidance_scale=0.0,
generator=generator,
).images[0]
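        # downscale the 1024x1024 output to 256x256 before saving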
image = image.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)
img_path = f"wuerstchen_{i}.png"
image.save(img_path)
main_dict.update(
{
prompt: {
"img_path": img_path,
"Category": sample["Category"],
"Challenge": sample["Challenge"],
"Note": sample["Note"],
"model_name": ckpt_id,
"seed": seed,
}
}
)
def generation_fn():
for prompt in main_dict:
prompt_entry = main_dict[prompt]
yield {
"Prompt": prompt,
"Category": prompt_entry["Category"],
"Challenge": prompt_entry["Challenge"],
"Note": prompt_entry["Note"],
"images": {"path": prompt_entry["img_path"]},
"model_name": prompt_entry["model_name"],
"seed": prompt_entry["seed"],
}
print("Preparing HF dataset...")
ds = Dataset.from_generator(
generation_fn,
features=Features(
Prompt=Value("string"),
Category=Value("string"),
Challenge=Value("string"),
Note=Value("string"),
images=ImageFeature(),
model_name=Value("string"),
seed=Value("int64"),
),
)
ds_id = "diffusers-parti-prompts/wuerstchen"
ds.push_to_hub(ds_id)
if __name__ == "__main__":
main()
```
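Once pushed, the resulting dataset can be loaded back as usual (a minimal sketch):
```py
from datasets import load_dataset
ds = load_dataset("diffusers-parti-prompts/wuerstchen", split="train")
print(ds[0]["Prompt"], ds[0]["model_name"])
img = ds[0]["images"]  # decoded as a PIL image by the image feature
```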
|
diffusers-parti-prompts/wuerstchen
|
[
"region:us"
] |
2023-09-11T16:12:20+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Challenge", "dtype": "string"}, {"name": "Note", "dtype": "string"}, {"name": "images", "dtype": "image"}, {"name": "model_name", "dtype": "string"}, {"name": "seed", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 149898953.312, "num_examples": 1632}], "download_size": 150261013, "dataset_size": 149898953.312}}
|
2023-09-13T16:08:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wuerstchen"
The dataset was generated using the code below:
|
[
"# Dataset Card for \"wuerstchen\"\n\nDataset was generated using the code below:"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wuerstchen\"\n\nDataset was generated using the code below:"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wuerstchen\"\n\nDataset was generated using the code below:"
] |
87bffb0b1cecff638c2e8796057aa53c1f7f3f4a
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
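# pick one of the task configurations; the "train" split always points to the latest run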
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T15:11:31.782321](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified1/blob/main/results_2023-10-24T15-11-31.782321.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.22074244966442952,
"em_stderr": 0.004247399285462808,
"f1": 0.26249685402684586,
"f1_stderr": 0.004281519928844823,
"acc": 0.3537213083265467,
"acc_stderr": 0.008025969949525829
},
"harness|drop|3": {
"em": 0.22074244966442952,
"em_stderr": 0.004247399285462808,
"f1": 0.26249685402684586,
"f1_stderr": 0.004281519928844823
},
"harness|gsm8k|5": {
"acc": 0.01288855193328279,
"acc_stderr": 0.003106901266499639
},
"harness|winogrande|5": {
"acc": 0.6945540647198106,
"acc_stderr": 0.012945038632552018
}
}
```
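For instance, to fetch the most recent details for a single task, or the aggregated metrics, you can target the "latest" split directly (a minimal sketch using the configuration names from the metadata above):
```python
from datasets import load_dataset
# per-task details from the most recent run
winogrande_latest = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified1",
    "harness_winogrande_5",
    split="latest")
# aggregated metrics across all tasks and runs
results = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified1",
    "results",
    split="latest")
```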
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified1
|
[
"region:us"
] |
2023-09-11T16:12:57+00:00
|
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T15:11:31.782321](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified1/blob/main/results_2023-10-24T15-11-31.782321.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.22074244966442952,\n \"em_stderr\": 0.004247399285462808,\n \"f1\": 0.26249685402684586,\n \"f1_stderr\": 0.004281519928844823,\n \"acc\": 0.3537213083265467,\n \"acc_stderr\": 0.008025969949525829\n },\n \"harness|drop|3\": {\n \"em\": 0.22074244966442952,\n \"em_stderr\": 0.004247399285462808,\n \"f1\": 0.26249685402684586,\n \"f1_stderr\": 0.004281519928844823\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01288855193328279,\n \"acc_stderr\": 0.003106901266499639\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6945540647198106,\n \"acc_stderr\": 0.012945038632552018\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T15_11_31.782321", "path": ["**/details_harness|drop|3_2023-10-24T15-11-31.782321.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T15-11-31.782321.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T15_11_31.782321", "path": ["**/details_harness|gsm8k|5_2023-10-24T15-11-31.782321.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T15-11-31.782321.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-12-40.999408.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-12-40.999408.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-12-40.999408.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-12-40.999408.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-12-40.999408.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-12-40.999408.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T15_11_31.782321", "path": ["**/details_harness|winogrande|5_2023-10-24T15-11-31.782321.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T15-11-31.782321.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_12_40.999408", "path": ["results_2023-09-11T17-12-40.999408.parquet"]}, {"split": "2023_10_24T15_11_31.782321", "path": ["results_2023-10-24T15-11-31.782321.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T15-11-31.782321.parquet"]}]}]}
|
2023-10-24T14:11:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-24T15:11:31.782321 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T15:11:31.782321(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T15:11:31.782321(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
30,
31,
178,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T15:11:31.782321(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
34364b7a463c6fbf08d2347740157d7891214d9f
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified2",
"harness_winogrande_5",
split="train")
```
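Replacing `harness_winogrande_5` with any other config name from the list in this dataset's metadata (for example `harness_gsm8k_5` or `harness_drop_3`) selects the per-sample details for that task instead.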
## Latest results
These are the [latest results from run 2023-10-27T22:58:11.726308](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified2/blob/main/results_2023-10-27T22-58-11.726308.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.23531879194630873,
"em_stderr": 0.004344183534613289,
"f1": 0.2819253355704704,
"f1_stderr": 0.004370090480372652,
"acc": 0.35187260684561084,
"acc_stderr": 0.007592132282444493
},
"harness|drop|3": {
"em": 0.23531879194630873,
"em_stderr": 0.004344183534613289,
"f1": 0.2819253355704704,
"f1_stderr": 0.004370090480372652
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022545148
},
"harness|winogrande|5": {
"acc": 0.696921862667719,
"acc_stderr": 0.01291672746263447
}
}
```
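The aggregated numbers above are also exposed through the "results" configuration described earlier; a minimal sketch, assuming (as the split listing in the metadata below indicates) that the `latest` split aliases the newest run:
```python
from datasets import load_dataset

# The "results" config collects the aggregated metrics of every run;
# the "latest" split points at the most recent one.
results = load_dataset(
    "open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified2",
    "results",
    split="latest",
)
print(results[0])  # aggregated metrics for the 2023-10-27 run
```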
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified2
|
[
"region:us"
] |
2023-09-11T16:15:40+00:00
|
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-27T22:58:11.726308](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified2/blob/main/results_2023-10-27T22-58-11.726308.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.23531879194630873,\n \"em_stderr\": 0.004344183534613289,\n \"f1\": 0.2819253355704704,\n \"f1_stderr\": 0.004370090480372652,\n \"acc\": 0.35187260684561084,\n \"acc_stderr\": 0.007592132282444493\n },\n \"harness|drop|3\": {\n \"em\": 0.23531879194630873,\n \"em_stderr\": 0.004344183534613289,\n \"f1\": 0.2819253355704704,\n \"f1_stderr\": 0.004370090480372652\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.0022675371022545148\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.696921862667719,\n \"acc_stderr\": 0.01291672746263447\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_27T22_58_11.726308", "path": ["**/details_harness|drop|3_2023-10-27T22-58-11.726308.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-27T22-58-11.726308.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_27T22_58_11.726308", "path": ["**/details_harness|gsm8k|5_2023-10-27T22-58-11.726308.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-27T22-58-11.726308.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-15-24.260844.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-15-24.260844.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-15-24.260844.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-15-24.260844.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-15-24.260844.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-15-24.260844.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_27T22_58_11.726308", "path": ["**/details_harness|winogrande|5_2023-10-27T22-58-11.726308.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-27T22-58-11.726308.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_15_24.260844", "path": ["results_2023-09-11T17-15-24.260844.parquet"]}, {"split": "2023_10_27T22_58_11.726308", "path": ["results_2023-10-27T22-58-11.726308.parquet"]}, {"split": "latest", "path": ["results_2023-10-27T22-58-11.726308.parquet"]}]}]}
|
2023-10-27T21:58:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
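A minimal sketch of the call this sentence refers to, using the repo id and config name given in the full card for this model above:
```python
from datasets import load_dataset

# Repo id and config name as given in the full card for this model.
data = load_dataset(
    "open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified2",
    "harness_winogrande_5",
    split="train",
)
```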
## Latest results
These are the latest results from run 2023-10-27T22:58:11.726308 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T22:58:11.726308(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T22:58:11.726308(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
30,
31,
178,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-27T22:58:11.726308(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
292b4f01fefd76f9dc7092451d45cf2cbd6866a2
|
# Dataset Card for Evaluation run of Writer/palmyra-med-20b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Writer/palmyra-med-20b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Writer/palmyra-med-20b](https://huggingface.co/Writer/palmyra-med-20b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Writer__palmyra-med-20b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-27T07:43:51.319096](https://huggingface.co/datasets/open-llm-leaderboard/details_Writer__palmyra-med-20b/blob/main/results_2023-10-27T07-43-51.319096.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.05851510067114094,
"em_stderr": 0.0024037002515447264,
"f1": 0.11879089765100655,
"f1_stderr": 0.002692444748823806,
"acc": 0.34002374380150946,
"acc_stderr": 0.008900409699475408
},
"harness|drop|3": {
"em": 0.05851510067114094,
"em_stderr": 0.0024037002515447264,
"f1": 0.11879089765100655,
"f1_stderr": 0.002692444748823806
},
"harness|gsm8k|5": {
"acc": 0.026535253980288095,
"acc_stderr": 0.004427045987265168
},
"harness|winogrande|5": {
"acc": 0.6535122336227308,
"acc_stderr": 0.013373773411685648
}
}
```
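Since this dataset was built from three runs, each run is also reachable by its timestamped split name; a minimal sketch using config and split names taken from the configuration metadata below:
```python
from datasets import load_dataset

# Each run is a split named after its timestamp (see the config metadata);
# "latest" always aliases the newest of them.
arc_run = load_dataset(
    "open-llm-leaderboard/details_Writer__palmyra-med-20b",
    "harness_arc_challenge_25",
    split="2023_09_12T21_53_25.718910",
)
```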
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Writer__palmyra-med-20b
|
[
"region:us"
] |
2023-09-11T16:21:33+00:00
|
{"pretty_name": "Evaluation run of Writer/palmyra-med-20b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Writer/palmyra-med-20b](https://huggingface.co/Writer/palmyra-med-20b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Writer__palmyra-med-20b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-27T07:43:51.319096](https://huggingface.co/datasets/open-llm-leaderboard/details_Writer__palmyra-med-20b/blob/main/results_2023-10-27T07-43-51.319096.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.05851510067114094,\n \"em_stderr\": 0.0024037002515447264,\n \"f1\": 0.11879089765100655,\n \"f1_stderr\": 0.002692444748823806,\n \"acc\": 0.34002374380150946,\n \"acc_stderr\": 0.008900409699475408\n },\n \"harness|drop|3\": {\n \"em\": 0.05851510067114094,\n \"em_stderr\": 0.0024037002515447264,\n \"f1\": 0.11879089765100655,\n \"f1_stderr\": 0.002692444748823806\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.026535253980288095,\n \"acc_stderr\": 0.004427045987265168\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6535122336227308,\n \"acc_stderr\": 0.013373773411685648\n }\n}\n```", "repo_url": "https://huggingface.co/Writer/palmyra-med-20b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|arc:challenge|25_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_27T07_43_51.319096", "path": ["**/details_harness|drop|3_2023-10-27T07-43-51.319096.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-27T07-43-51.319096.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_27T07_43_51.319096", "path": ["**/details_harness|gsm8k|5_2023-10-27T07-43-51.319096.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-27T07-43-51.319096.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": 
["**/details_harness|hellaswag|10_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-21-21.677448.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-21-21.677448.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T21-53-25.718910.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T21-53-25.718910.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T21-53-25.718910.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-12T21-53-25.718910.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": 
"2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": 
["**/details_harness|hendrycksTest-marketing|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": 
"2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": 
"2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-12T21-53-25.718910.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-12T21-53-25.718910.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_27T07_43_51.319096", "path": ["**/details_harness|winogrande|5_2023-10-27T07-43-51.319096.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-27T07-43-51.319096.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_21_21.677448", "path": ["results_2023-09-11T17-21-21.677448.parquet"]}, {"split": "2023_09_12T21_53_25.718910", "path": ["results_2023-09-12T21-53-25.718910.parquet"]}, {"split": "2023_10_27T07_43_51.319096", "path": ["results_2023-10-27T07-43-51.319096.parquet"]}, {"split": "latest", "path": ["results_2023-10-27T07-43-51.319096.parquet"]}]}]}
|
2023-10-27T06:44:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Writer/palmyra-med-20b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Writer/palmyra-med-20b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
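```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Writer__palmyra-med-20b",
	"harness_winogrande_5",
	split="train")
```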
## Latest results
These are the latest results from run 2023-10-27T07:43:51.319096 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
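```python
{
    "all": {
        "em": 0.05851510067114094,
        "em_stderr": 0.0024037002515447264,
        "f1": 0.11879089765100655,
        "f1_stderr": 0.002692444748823806,
        "acc": 0.34002374380150946,
        "acc_stderr": 0.008900409699475408
    },
    "harness|drop|3": {
        "em": 0.05851510067114094,
        "em_stderr": 0.0024037002515447264,
        "f1": 0.11879089765100655,
        "f1_stderr": 0.002692444748823806
    },
    "harness|gsm8k|5": {
        "acc": 0.026535253980288095,
        "acc_stderr": 0.004427045987265168
    },
    "harness|winogrande|5": {
        "acc": 0.6535122336227308,
        "acc_stderr": 0.013373773411685648
    }
}
```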
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Writer/palmyra-med-20b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Writer/palmyra-med-20b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T07:43:51.319096(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Writer/palmyra-med-20b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Writer/palmyra-med-20b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T07:43:51.319096(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Writer/palmyra-med-20b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Writer/palmyra-med-20b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-27T07:43:51.319096(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
caa7e6d51653267271ad00d04cdca432658f9e06
|
# Dataset Card for Evaluation run of Undi95/CreativityEngine
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Undi95/CreativityEngine
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Undi95/CreativityEngine](https://huggingface.co/Undi95/CreativityEngine) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__CreativityEngine",
"harness_winogrande_5",
split="train")
```
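To load the aggregated scores instead, you can query the "results" configuration the same way (a minimal sketch; the configuration and split names below come from the configurations listed in this card's metadata):
```python
from datasets import load_dataset

# Aggregated metrics; the "latest" split points to the most recent run
results = load_dataset("open-llm-leaderboard/details_Undi95__CreativityEngine",
	"results",
	split="latest")
```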
## Latest results
These are the [latest results from run 2023-10-28T03:34:54.369545](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__CreativityEngine/blob/main/results_2023-10-28T03-34-54.369545.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.24706375838926176,
"em_stderr": 0.00441695804511364,
"f1": 0.32981753355704885,
"f1_stderr": 0.004357223834591547,
"acc": 0.4187184690035083,
"acc_stderr": 0.010197442302564062
},
"harness|drop|3": {
"em": 0.24706375838926176,
"em_stderr": 0.00441695804511364,
"f1": 0.32981753355704885,
"f1_stderr": 0.004357223834591547
},
"harness|gsm8k|5": {
"acc": 0.09552691432903715,
"acc_stderr": 0.008096605771155733
},
"harness|winogrande|5": {
"acc": 0.7419100236779794,
"acc_stderr": 0.01229827883397239
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Undi95__CreativityEngine
|
[
"region:us"
] |
2023-09-11T16:22:49+00:00
|
{"pretty_name": "Evaluation run of Undi95/CreativityEngine", "dataset_summary": "Dataset automatically created during the evaluation run of model [Undi95/CreativityEngine](https://huggingface.co/Undi95/CreativityEngine) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__CreativityEngine\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T03:34:54.369545](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__CreativityEngine/blob/main/results_2023-10-28T03-34-54.369545.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.24706375838926176,\n \"em_stderr\": 0.00441695804511364,\n \"f1\": 0.32981753355704885,\n \"f1_stderr\": 0.004357223834591547,\n \"acc\": 0.4187184690035083,\n \"acc_stderr\": 0.010197442302564062\n },\n \"harness|drop|3\": {\n \"em\": 0.24706375838926176,\n \"em_stderr\": 0.00441695804511364,\n \"f1\": 0.32981753355704885,\n \"f1_stderr\": 0.004357223834591547\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09552691432903715,\n \"acc_stderr\": 0.008096605771155733\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7419100236779794,\n \"acc_stderr\": 0.01229827883397239\n }\n}\n```", "repo_url": "https://huggingface.co/Undi95/CreativityEngine", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T03_34_54.369545", "path": ["**/details_harness|drop|3_2023-10-28T03-34-54.369545.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T03-34-54.369545.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T03_34_54.369545", "path": ["**/details_harness|gsm8k|5_2023-10-28T03-34-54.369545.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T03-34-54.369545.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": 
"2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-22-32.752077.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-22-32.752077.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-22-32.752077.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-22-32.752077.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-22-32.752077.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T03_34_54.369545", "path": ["**/details_harness|winogrande|5_2023-10-28T03-34-54.369545.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T03-34-54.369545.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_22_32.752077", "path": ["results_2023-09-11T17-22-32.752077.parquet"]}, {"split": "2023_10_28T03_34_54.369545", "path": ["results_2023-10-28T03-34-54.369545.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T03-34-54.369545.parquet"]}]}]}
|
2023-10-28T02:35:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Undi95/CreativityEngine
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Undi95/CreativityEngine on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
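```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__CreativityEngine",
	"harness_winogrande_5",
	split="train")
```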
## Latest results
These are the latest results from run 2023-10-28T03:34:54.369545 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
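```python
{
    "all": {
        "em": 0.24706375838926176,
        "em_stderr": 0.00441695804511364,
        "f1": 0.32981753355704885,
        "f1_stderr": 0.004357223834591547,
        "acc": 0.4187184690035083,
        "acc_stderr": 0.010197442302564062
    },
    "harness|drop|3": {
        "em": 0.24706375838926176,
        "em_stderr": 0.00441695804511364,
        "f1": 0.32981753355704885,
        "f1_stderr": 0.004357223834591547
    },
    "harness|gsm8k|5": {
        "acc": 0.09552691432903715,
        "acc_stderr": 0.008096605771155733
    },
    "harness|winogrande|5": {
        "acc": 0.7419100236779794,
        "acc_stderr": 0.01229827883397239
    }
}
```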
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Undi95/CreativityEngine",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/CreativityEngine on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T03:34:54.369545(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Undi95/CreativityEngine",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/CreativityEngine on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T03:34:54.369545(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
18,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Undi95/CreativityEngine## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/CreativityEngine on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T03:34:54.369545(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f16f224768b9a9f1759b338232118a70eed3a162
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch](https://huggingface.co/CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch",
"harness_winogrande_5",
split="train")
```
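The metadata for this repo also exposes a "results" configuration whose "latest" split tracks the most recent aggregated run. A minimal sketch for pulling those aggregated scores, assuming those config and split names from the dataset metadata:

```python
from datasets import load_dataset

# Aggregated metrics for the most recent run; the "results" config and
# "latest" split names are taken from the dataset metadata (assumed here).
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch",
    "results",
    split="latest",
)
print(results)
```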
## Latest results
These are the [latest results from run 2023-10-29T07:23:25.462349](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch/blob/main/results_2023-10-29T07-23-25.462349.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.018141778523489933,
"em_stderr": 0.0013667968592600756,
"f1": 0.07543204697986576,
"f1_stderr": 0.001821957230823281,
"acc": 0.4184426148579471,
"acc_stderr": 0.009471869145583567
},
"harness|drop|3": {
"em": 0.018141778523489933,
"em_stderr": 0.0013667968592600756,
"f1": 0.07543204697986576,
"f1_stderr": 0.001821957230823281
},
"harness|gsm8k|5": {
"acc": 0.07050796057619409,
"acc_stderr": 0.0070515438139836075
},
"harness|winogrande|5": {
"acc": 0.7663772691397001,
"acc_stderr": 0.011892194477183527
}
}
```
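With 64 configurations in the repo, it can be easier to enumerate them programmatically than to read the metadata by hand. A small sketch using the standard `datasets` config-listing helper (only the repo id comes from this card):

```python
from datasets import get_dataset_config_names

# Enumerate every evaluation configuration stored in this details repo,
# e.g. "harness_winogrande_5", "harness_gsm8k_5", ..., "results".
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch"
)
print(len(configs), configs[:5])
```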
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch
|
[
"region:us"
] |
2023-09-11T16:28:07+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch](https://huggingface.co/CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-29T07:23:25.462349](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch/blob/main/results_2023-10-29T07-23-25.462349.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.018141778523489933,\n \"em_stderr\": 0.0013667968592600756,\n \"f1\": 0.07543204697986576,\n \"f1_stderr\": 0.001821957230823281,\n \"acc\": 0.4184426148579471,\n \"acc_stderr\": 0.009471869145583567\n },\n \"harness|drop|3\": {\n \"em\": 0.018141778523489933,\n \"em_stderr\": 0.0013667968592600756,\n \"f1\": 0.07543204697986576,\n \"f1_stderr\": 0.001821957230823281\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07050796057619409,\n \"acc_stderr\": 0.0070515438139836075\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183527\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_29T07_23_25.462349", "path": ["**/details_harness|drop|3_2023-10-29T07-23-25.462349.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-29T07-23-25.462349.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_29T07_23_25.462349", "path": ["**/details_harness|gsm8k|5_2023-10-29T07-23-25.462349.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-29T07-23-25.462349.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": 
["**/details_harness|hellaswag|10_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-27-50.905630.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-27-50.905630.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-27-50.905630.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-09-11T17-27-50.905630.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-27-50.905630.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-27-50.905630.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-27-50.905630.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_29T07_23_25.462349", "path": ["**/details_harness|winogrande|5_2023-10-29T07-23-25.462349.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-29T07-23-25.462349.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_27_50.905630", "path": ["results_2023-09-11T17-27-50.905630.parquet"]}, {"split": "2023_10_29T07_23_25.462349", "path": ["results_2023-10-29T07-23-25.462349.parquet"]}, {"split": "latest", "path": ["results_2023-10-29T07-23-25.462349.parquet"]}]}]}
|
2023-10-29T07:23:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
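For instance, a single timestamped run can be loaded directly by using that timestamp as the split name. A minimal sketch (the repository id below is an assumption, following the leaderboard's usual `details_<org>__<model>` naming; the config and split names are taken from the configuration list above):

```python
from datasets import load_dataset

# NOTE: repository id is an assumption based on the "details_<org>__<model>" convention
repo = "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch"

# each run is a split named after its timestamp; this one is the 2023-10-29 run
run = load_dataset(repo, "harness_winogrande_5", split="2023_10_29T07_23_25.462349")
```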
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
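A minimal sketch of that call, mirroring the loader template used elsewhere in this dump (same assumed repository id as above):

```python
from datasets import load_dataset

# load the details of one task; "train" always points at the latest run
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch",
	"harness_winogrande_5",
	split="train")
```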
## Latest results
These are the latest results from run 2023-10-29T07:23:25.462349 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
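The numeric blob for this run is not reproduced in this card, but the aggregated numbers can be fetched from the "results" configuration, whose "latest" split points at this run (repository id assumed as above):

```python
from datasets import load_dataset

# "results" aggregates every run; "latest" tracks the 2023-10-29T07:23:25 run
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch",
    "results",
    split="latest",
)
```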
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T07:23:25.462349(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T07:23:25.462349(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
38,
31,
186,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-Open_Platypus_and_ccp_2.6w-3_epoch on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-29T07:23:25.462349(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6b82a040dfef7943a4f7b6c3ca24d125e085c062
|
# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Mikivis/gpt2-large-lora-sft2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Mikivis/gpt2-large-lora-sft2](https://huggingface.co/Mikivis/gpt2-large-lora-sft2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-28T04:22:22.884901](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft2/blob/main/results_2023-10-28T04-22-22.884901.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.0024119127516778523,
"em_stderr": 0.0005023380498893326,
"f1": 0.08311556208053691,
"f1_stderr": 0.00173629879963612,
"acc": 0.26835043409629045,
"acc_stderr": 0.00700728922942163
},
"harness|drop|3": {
"em": 0.0024119127516778523,
"em_stderr": 0.0005023380498893326,
"f1": 0.08311556208053691,
"f1_stderr": 0.00173629879963612
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5367008681925809,
"acc_stderr": 0.01401457845884326
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft2
|
[
"region:us"
] |
2023-09-11T16:29:31+00:00
|
{"pretty_name": "Evaluation run of Mikivis/gpt2-large-lora-sft2", "dataset_summary": "Dataset automatically created during the evaluation run of model [Mikivis/gpt2-large-lora-sft2](https://huggingface.co/Mikivis/gpt2-large-lora-sft2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T04:22:22.884901](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft2/blob/main/results_2023-10-28T04-22-22.884901.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0024119127516778523,\n \"em_stderr\": 0.0005023380498893326,\n \"f1\": 0.08311556208053691,\n \"f1_stderr\": 0.00173629879963612,\n \"acc\": 0.26835043409629045,\n \"acc_stderr\": 0.00700728922942163\n },\n \"harness|drop|3\": {\n \"em\": 0.0024119127516778523,\n \"em_stderr\": 0.0005023380498893326,\n \"f1\": 0.08311556208053691,\n \"f1_stderr\": 0.00173629879963612\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5367008681925809,\n \"acc_stderr\": 0.01401457845884326\n }\n}\n```", "repo_url": "https://huggingface.co/Mikivis/gpt2-large-lora-sft2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T04_22_22.884901", "path": ["**/details_harness|drop|3_2023-10-28T04-22-22.884901.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T04-22-22.884901.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T04_22_22.884901", "path": ["**/details_harness|gsm8k|5_2023-10-28T04-22-22.884901.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T04-22-22.884901.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": 
"2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-29-20.657101.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-29-20.657101.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-29-20.657101.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-29-20.657101.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-29-20.657101.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T04_22_22.884901", "path": ["**/details_harness|winogrande|5_2023-10-28T04-22-22.884901.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T04-22-22.884901.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_29_20.657101", "path": ["results_2023-09-11T17-29-20.657101.parquet"]}, {"split": "2023_10_28T04_22_22.884901", "path": ["results_2023-10-28T04-22-22.884901.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T04-22-22.884901.parquet"]}]}]}
|
2023-10-28T03:22:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft2 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-28T04:22:22.884901 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T04:22:22.884901(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T04:22:22.884901(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Mikivis/gpt2-large-lora-sft2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/gpt2-large-lora-sft2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T04:22:22.884901(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6f1b54cfb0c2885e01a4ebeebde8e1e9ee3ec64c
|
# Dataset Card for Evaluation run of TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ",
"harness_winogrande_5",
split="train")
```
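The aggregated metrics live in the "results" configuration, and every configuration also exposes a "latest" split that always points to the newest run. Here is a minimal sketch for pulling just the aggregated numbers, assuming the same `load_dataset` API as above:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of each run;
# the "latest" split is an alias for the most recent timestamped split.
results = load_dataset(
    "open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ",
    "results",
    split="latest",
)
print(results)
```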
## Latest results
These are the [latest results from run 2023-10-28T21:00:02.304492](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ/blob/main/results_2023-10-28T21-00-02.304492.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.22158137583892618,
"em_stderr": 0.004253171428083824,
"f1": 0.28616296140939684,
"f1_stderr": 0.004276937020149761,
"acc": 0.3751559533333772,
"acc_stderr": 0.007270592555507228
},
"harness|drop|3": {
"em": 0.22158137583892618,
"em_stderr": 0.004253171428083824,
"f1": 0.28616296140939684,
"f1_stderr": 0.004276937020149761
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022544783
},
"harness|winogrande|5": {
"acc": 0.7434885556432518,
"acc_stderr": 0.012273648008759979
}
}
```
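The `*_stderr` fields make it easy to attach rough error bars to these numbers. As a quick worked example (assuming a normal approximation, which the card itself does not state), the winogrande accuracy above becomes:

```python
# Rough 95% confidence interval for "harness|winogrande|5" above,
# assuming a normal approximation: acc ± 1.96 * acc_stderr.
acc = 0.7434885556432518
acc_stderr = 0.012273648008759979
low, high = acc - 1.96 * acc_stderr, acc + 1.96 * acc_stderr
print(f"winogrande acc: {acc:.4f} (95% CI: {low:.4f}-{high:.4f})")  # ~0.719-0.768
```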
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ
|
[
"region:us"
] |
2023-09-11T16:32:26+00:00
|
{"pretty_name": "Evaluation run of TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T21:00:02.304492](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ/blob/main/results_2023-10-28T21-00-02.304492.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.22158137583892618,\n \"em_stderr\": 0.004253171428083824,\n \"f1\": 0.28616296140939684,\n \"f1_stderr\": 0.004276937020149761,\n \"acc\": 0.3751559533333772,\n \"acc_stderr\": 0.007270592555507228\n },\n \"harness|drop|3\": {\n \"em\": 0.22158137583892618,\n \"em_stderr\": 0.004253171428083824,\n \"f1\": 0.28616296140939684,\n \"f1_stderr\": 0.004276937020149761\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.0022675371022544783\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7434885556432518,\n \"acc_stderr\": 0.012273648008759979\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T21_00_02.304492", "path": ["**/details_harness|drop|3_2023-10-28T21-00-02.304492.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T21-00-02.304492.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T21_00_02.304492", "path": ["**/details_harness|gsm8k|5_2023-10-28T21-00-02.304492.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T21-00-02.304492.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-08.880546.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-08.880546.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-08.880546.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-08.880546.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-32-08.880546.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-32-08.880546.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T21_00_02.304492", "path": ["**/details_harness|winogrande|5_2023-10-28T21-00-02.304492.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T21-00-02.304492.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_32_08.880546", "path": ["results_2023-09-11T17-32-08.880546.parquet"]}, {"split": "2023_10_28T21_00_02.304492", "path": ["results_2023-10-28T21-00-02.304492.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T21-00-02.304492.parquet"]}]}]}
|
2023-10-28T20:00:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-28T21:00:02.304492 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T21:00:02.304492(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T21:00:02.304492(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
180,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T21:00:02.304492(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
11adbdbcc8375696e713bbbd8c3c55573b338b2a
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified4",
"harness_winogrande_5",
split="train")
```
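Because each run is stored under a split named after its timestamp, a specific run can also be loaded directly instead of the "latest" alias. A sketch using a split name taken from this card's own metadata (verify against the repo, since later runs add new splits):

```python
from datasets import load_dataset

# Load one specific timestamped run; the split name below comes from
# this card's metadata (the 2023-10-23 gsm8k run).
run = load_dataset(
    "open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified4",
    "harness_gsm8k_5",
    split="2023_10_23T10_09_09.796535",
)
print(run)
```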
## Latest results
These are the [latest results from run 2023-10-23T10:09:09.796535](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified4/blob/main/results_2023-10-23T10-09-09.796535.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.22713926174496643,
"em_stderr": 0.004290781297690954,
"f1": 0.2716809983221478,
"f1_stderr": 0.004317738520761278,
"acc": 0.33976344759040505,
"acc_stderr": 0.006940874719140418
},
"harness|drop|3": {
"em": 0.22713926174496643,
"em_stderr": 0.004290781297690954,
"f1": 0.2716809983221478,
"f1_stderr": 0.004317738520761278
},
"harness|gsm8k|5": {
"acc": 0.000758150113722517,
"acc_stderr": 0.0007581501137225188
},
"harness|winogrande|5": {
"acc": 0.6787687450670876,
"acc_stderr": 0.013123599324558317
}
}
```
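For this card the top-level "all" block is simply the unweighted mean of the per-task metrics; the accuracy above can be re-derived in one line (an observation from the numbers shown here, not documented behavior of the harness):

```python
# "all".acc equals the plain average of the gsm8k and winogrande accuracies.
gsm8k_acc = 0.000758150113722517
winogrande_acc = 0.6787687450670876
print((gsm8k_acc + winogrande_acc) / 2)  # 0.33976344759040505, matching "all"
```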
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified4
|
[
"region:us"
] |
2023-09-11T16:33:15+00:00
|
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T10:09:09.796535](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified4/blob/main/results_2023-10-23T10-09-09.796535.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.22713926174496643,\n \"em_stderr\": 0.004290781297690954,\n \"f1\": 0.2716809983221478,\n \"f1_stderr\": 0.004317738520761278,\n \"acc\": 0.33976344759040505,\n \"acc_stderr\": 0.006940874719140418\n },\n \"harness|drop|3\": {\n \"em\": 0.22713926174496643,\n \"em_stderr\": 0.004290781297690954,\n \"f1\": 0.2716809983221478,\n \"f1_stderr\": 0.004317738520761278\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.000758150113722517,\n \"acc_stderr\": 0.0007581501137225188\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6787687450670876,\n \"acc_stderr\": 0.013123599324558317\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_23T10_09_09.796535", "path": ["**/details_harness|drop|3_2023-10-23T10-09-09.796535.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T10-09-09.796535.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_23T10_09_09.796535", "path": ["**/details_harness|gsm8k|5_2023-10-23T10-09-09.796535.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T10-09-09.796535.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-59.033048.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-59.033048.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-59.033048.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-59.033048.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-32-59.033048.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-32-59.033048.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_23T10_09_09.796535", "path": ["**/details_harness|winogrande|5_2023-10-23T10-09-09.796535.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-23T10-09-09.796535.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_32_59.033048", "path": ["results_2023-09-11T17-32-59.033048.parquet"]}, {"split": "2023_10_23T10_09_09.796535", "path": ["results_2023-10-23T10-09-09.796535.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T10-09-09.796535.parquet"]}]}]}
|
2023-10-23T09:09:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
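```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco-modified4",
	"harness_winogrande_5",
	split="train")
```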
## Latest results
These are the latest results from run 2023-10-23T10:09:09.796535 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
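```python
{
    "all": {
        "em": 0.22713926174496643,
        "em_stderr": 0.004290781297690954,
        "f1": 0.2716809983221478,
        "f1_stderr": 0.004317738520761278,
        "acc": 0.33976344759040505,
        "acc_stderr": 0.006940874719140418
    },
    "harness|drop|3": {
        "em": 0.22713926174496643,
        "em_stderr": 0.004290781297690954,
        "f1": 0.2716809983221478,
        "f1_stderr": 0.004317738520761278
    },
    "harness|gsm8k|5": {
        "acc": 0.000758150113722517,
        "acc_stderr": 0.0007581501137225188
    },
    "harness|winogrande|5": {
        "acc": 0.6787687450670876,
        "acc_stderr": 0.013123599324558317
    }
}
```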
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T10:09:09.796535 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and in the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T10:09:09.796535 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and in the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
30,
31,
178,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco-modified4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T10:09:09.796535 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and in the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b4b667021d2356c609088cb07a4a5664775d3ad9
|
# Dataset Card for Evaluation run of Azure99/blossom-v2-llama2-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Azure99/blossom-v2-llama2-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Azure99/blossom-v2-llama2-7b](https://huggingface.co/Azure99/blossom-v2-llama2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b",
"harness_winogrande_5",
split="train")
```
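Each configuration loads as an ordinary `datasets` split, so the usual inspection tools apply. A minimal sketch (the exact per-example columns vary by harness task, so the printed fields are illustrative rather than guaranteed):

```python
from datasets import load_dataset

# Inspect the latest winogrande details for this model; the config and split
# names are taken from the configs listed in this card's metadata.
data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b",
                    "harness_winogrande_5",
                    split="latest")

print(len(data))          # number of evaluated examples
print(data.column_names)  # per-example fields logged by the harness
print(data[0])            # full record for the first evaluated example
```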
## Latest results
These are the [latest results from run 2023-10-25T10:27:00.906454](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b/blob/main/results_2023-10-25T10-27-00.906454.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.35192953020134227,
"em_stderr": 0.004890785574074548,
"f1": 0.4060790687919473,
"f1_stderr": 0.004773085782687634,
"acc": 0.3956260064038852,
"acc_stderr": 0.009074017772494654
},
"harness|drop|3": {
"em": 0.35192953020134227,
"em_stderr": 0.004890785574074548,
"f1": 0.4060790687919473,
"f1_stderr": 0.004773085782687634
},
"harness|gsm8k|5": {
"acc": 0.047763457164518575,
"acc_stderr": 0.005874387536229319
},
"harness|winogrande|5": {
"acc": 0.7434885556432518,
"acc_stderr": 0.012273648008759989
}
}
```
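The aggregated numbers above are also exposed as the "results" configuration, whose "latest" split always points at the most recent run. A minimal sketch for fetching them programmatically (the row layout is assumed to mirror the flattened JSON shown above):

```python
from datasets import load_dataset

# "results" aggregates all runs; "latest" resolves to the newest results file.
results = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b",
                       "results",
                       split="latest")
print(results[0])  # aggregated metrics for the latest run
```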
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b
|
[
"region:us"
] |
2023-09-11T16:39:39+00:00
|
{"pretty_name": "Evaluation run of Azure99/blossom-v2-llama2-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Azure99/blossom-v2-llama2-7b](https://huggingface.co/Azure99/blossom-v2-llama2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T10:27:00.906454](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b/blob/main/results_2023-10-25T10-27-00.906454.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.35192953020134227,\n \"em_stderr\": 0.004890785574074548,\n \"f1\": 0.4060790687919473,\n \"f1_stderr\": 0.004773085782687634,\n \"acc\": 0.3956260064038852,\n \"acc_stderr\": 0.009074017772494654\n },\n \"harness|drop|3\": {\n \"em\": 0.35192953020134227,\n \"em_stderr\": 0.004890785574074548,\n \"f1\": 0.4060790687919473,\n \"f1_stderr\": 0.004773085782687634\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.047763457164518575,\n \"acc_stderr\": 0.005874387536229319\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7434885556432518,\n \"acc_stderr\": 0.012273648008759989\n }\n}\n```", "repo_url": "https://huggingface.co/Azure99/blossom-v2-llama2-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T10_27_00.906454", "path": ["**/details_harness|drop|3_2023-10-25T10-27-00.906454.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T10-27-00.906454.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T10_27_00.906454", "path": ["**/details_harness|gsm8k|5_2023-10-25T10-27-00.906454.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T10-27-00.906454.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", 
"data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-39-22.579303.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-39-22.579303.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-39-22.579303.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-39-22.579303.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-39-22.579303.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T10_27_00.906454", "path": ["**/details_harness|winogrande|5_2023-10-25T10-27-00.906454.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T10-27-00.906454.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_39_22.579303", "path": ["results_2023-09-11T17-39-22.579303.parquet"]}, {"split": "2023_10_25T10_27_00.906454", "path": ["results_2023-10-25T10-27-00.906454.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T10-27-00.906454.parquet"]}]}]}
|
2023-10-25T09:27:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Azure99/blossom-v2-llama2-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Azure99/blossom-v2-llama2-7b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
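A minimal sketch, assuming the repository follows the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming pattern (the exact dataset id is not spelled out in this rendition of the card):

```python
from datasets import load_dataset

# Hypothetical repo id, inferred from the standard naming pattern above
data = load_dataset(
    "open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b",
    "harness_winogrande_5",  # one of the 64 task configurations
    split="latest",          # "latest" points at the newest run
)
```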
## Latest results
These are the latest results from run 2023-10-25T10:27:00.906454 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
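The numeric results block is not reproduced in this rendition of the card, but the aggregated metrics live in the "results" configuration; a sketch under the same repo-id assumption as above:

```python
from datasets import load_dataset

# "results" aggregates all runs; the "latest" split points at the newest one
results = load_dataset(
    "open-llm-leaderboard/details_Azure99__blossom-v2-llama2-7b",
    "results",
    split="latest",
)
print(results[0])
```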
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Azure99/blossom-v2-llama2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v2-llama2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T10:27:00.906454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Azure99/blossom-v2-llama2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v2-llama2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T10:27:00.906454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Azure99/blossom-v2-llama2-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v2-llama2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T10:27:00.906454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
28a57c89e686beffabdda75ac8c0ca61bae71dda
|
# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@huggingface.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16](https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16",
"harness_winogrande_5",
split="train")
```
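Since the repository exposes one configuration per evaluated task (plus "results"), the available names can also be enumerated programmatically rather than copied from this card; a small sketch using the stock `datasets` helper:

```python
from datasets import get_dataset_config_names

# One config per task (e.g. "harness_winogrande_5"), plus the aggregated "results"
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16"
)
print(len(configs), configs[:5])
```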
## Latest results
These are the [latest results from run 2023-10-23T22:35:24.865174](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16/blob/main/results_2023-10-23T22-35-24.865174.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.01782718120805369,
"em_stderr": 0.0013551112361429815,
"f1": 0.08924706375838878,
"f1_stderr": 0.0019471028162232693,
"acc": 0.42603967392962905,
"acc_stderr": 0.00977820694915367
},
"harness|drop|3": {
"em": 0.01782718120805369,
"em_stderr": 0.0013551112361429815,
"f1": 0.08924706375838878,
"f1_stderr": 0.0019471028162232693
},
"harness|gsm8k|5": {
"acc": 0.08491281273692192,
"acc_stderr": 0.007678212824450797
},
"harness|winogrande|5": {
"acc": 0.7671665351223362,
"acc_stderr": 0.011878201073856544
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16
|
[
"region:us"
] |
2023-09-11T16:48:30+00:00
|
{"pretty_name": "Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16", "dataset_summary": "Dataset automatically created during the evaluation run of model [dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16](https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T22:35:24.865174](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16/blob/main/results_2023-10-23T22-35-24.865174.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.01782718120805369,\n \"em_stderr\": 0.0013551112361429815,\n \"f1\": 0.08924706375838878,\n \"f1_stderr\": 0.0019471028162232693,\n \"acc\": 0.42603967392962905,\n \"acc_stderr\": 0.00977820694915367\n },\n \"harness|drop|3\": {\n \"em\": 0.01782718120805369,\n \"em_stderr\": 0.0013551112361429815,\n \"f1\": 0.08924706375838878,\n \"f1_stderr\": 0.0019471028162232693\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08491281273692192,\n \"acc_stderr\": 0.007678212824450797\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856544\n }\n}\n```", "repo_url": "https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_23T22_35_24.865174", "path": ["**/details_harness|drop|3_2023-10-23T22-35-24.865174.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T22-35-24.865174.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_23T22_35_24.865174", "path": ["**/details_harness|gsm8k|5_2023-10-23T22-35-24.865174.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T22-35-24.865174.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-48-14.644615.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-48-14.644615.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-48-14.644615.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-48-14.644615.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-48-14.644615.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-48-14.644615.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-48-14.644615.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_23T22_35_24.865174", "path": ["**/details_harness|winogrande|5_2023-10-23T22-35-24.865174.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-23T22-35-24.865174.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_48_14.644615", "path": ["results_2023-09-11T17-48-14.644615.parquet"]}, {"split": "2023_10_23T22_35_24.865174", "path": ["results_2023-10-23T22-35-24.865174.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T22-35-24.865174.parquet"]}]}]}
|
2023-10-23T21:35:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
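For instance (a minimal sketch following the loader pattern these evaluation cards use; the repository id below is assumed from the leaderboard's `details_<org>__<model>` naming convention rather than stated in this card):

```python
from datasets import load_dataset

# Minimal sketch; the repository id is assumed from the leaderboard's
# details_<org>__<model> naming convention for this model.
data = load_dataset(
    "open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16",
    "harness_winogrande_5",  # one of the 64 task configurations
    split="train",           # "train" always points to the latest results
)
```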
## Latest results
These are the latest results from run 2023-10-23T22:35:24.865174 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T22:35:24.865174(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T22:35:24.865174(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
38,
31,
186,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T22:35:24.865174(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e92fe06c06a7884422bbca9a98b9512621358ee8
|
# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16](https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
    "open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16",
    "harness_winogrande_5",
    split="train",
)
```
## Latest results
These are the [latest results from run 2023-10-25T11:55:30.067408](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16/blob/main/results_2023-10-25T11-55-30.067408.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
    "all": {
        "em": 0.0016778523489932886,
        "em_stderr": 0.0004191330178826867,
        "f1": 0.06479446308724818,
        "f1_stderr": 0.0014092381006987735,
        "acc": 0.4316480101102639,
        "acc_stderr": 0.010106933446946506
    },
    "harness|drop|3": {
        "em": 0.0016778523489932886,
        "em_stderr": 0.0004191330178826867,
        "f1": 0.06479446308724818,
        "f1_stderr": 0.0014092381006987735
    },
    "harness|gsm8k|5": {
        "acc": 0.10007581501137225,
        "acc_stderr": 0.00826627452868562
    },
    "harness|winogrande|5": {
        "acc": 0.7632202052091555,
        "acc_stderr": 0.01194759236520739
    }
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
|
[
"region:us"
] |
2023-09-11T16:51:13+00:00
|
{"pretty_name": "Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16", "dataset_summary": "Dataset automatically created during the evaluation run of model [dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16](https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T11:55:30.067408](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16/blob/main/results_2023-10-25T11-55-30.067408.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.0004191330178826867,\n \"f1\": 0.06479446308724818,\n \"f1_stderr\": 0.0014092381006987735,\n \"acc\": 0.4316480101102639,\n \"acc_stderr\": 0.010106933446946506\n },\n \"harness|drop|3\": {\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.0004191330178826867,\n \"f1\": 0.06479446308724818,\n \"f1_stderr\": 0.0014092381006987735\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10007581501137225,\n \"acc_stderr\": 0.00826627452868562\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7632202052091555,\n \"acc_stderr\": 0.01194759236520739\n }\n}\n```", "repo_url": "https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T11_55_30.067408", "path": ["**/details_harness|drop|3_2023-10-25T11-55-30.067408.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T11-55-30.067408.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T11_55_30.067408", "path": ["**/details_harness|gsm8k|5_2023-10-25T11-55-30.067408.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T11-55-30.067408.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": 
["**/details_harness|hellaswag|10_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-50-57.787560.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T17-50-57.787560.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T11_55_30.067408", "path": ["**/details_harness|winogrande|5_2023-10-25T11-55-30.067408.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T11-55-30.067408.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T17_50_57.787560", "path": ["results_2023-09-11T17-50-57.787560.parquet"]}, {"split": "2023_10_25T11_55_30.067408", "path": ["results_2023-10-25T11-55-30.067408.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T11-55-30.067408.parquet"]}]}]}
|
2023-10-25T10:55:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
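For instance (mirroring the snippet shown in the full card above):

```python
from datasets import load_dataset

# Loads one of the 64 task configurations; the "train" split always
# points to the latest results for that task.
data = load_dataset(
    "open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16",
    "harness_winogrande_5",
    split="train",
)
```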
## Latest results
These are the latest results from run 2023-10-25T11:55:30.067408 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T11:55:30.067408(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T11:55:30.067408(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
39,
31,
187,
68,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T11:55:30.067408(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e3e554612bebf911189c0b5c08bcc61a4964f9ef
|
# Dataset Card for "processed_Cosmic_dataset_V2_inst_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
slaqrichi/processed_Cosmic_dataset_V2_inst_format
|
[
"region:us"
] |
2023-09-11T16:51:48+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86815, "num_examples": 95}], "download_size": 0, "dataset_size": 86815}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-12T08:55:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "processed_Cosmic_dataset_V2_inst_format"
More Information needed
|
[
"# Dataset Card for \"processed_Cosmic_dataset_V2_inst_format\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_Cosmic_dataset_V2_inst_format\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_Cosmic_dataset_V2_inst_format\"\n\nMore Information needed"
] |
4b94123abce1d0b40d20641cc98548c5c29ee8e6
|
# Dataset of kahili (Pokémon)
This is the dataset of kahili (Pokémon), containing 229 images and their tags.
The core tags of this character are `long_hair, ahoge, blue_hair, visor_cap, mole, mole_under_eye, breasts, light_blue_hair, blue_eyes, blue_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 229 | 211.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 229 | 133.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 524 | 266.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 229 | 194.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 524 | 353.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kahili_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
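The waifuc loader above only covers the raw package. For the IMG+TXT packages (e.g. `dataset-800.zip`), plain Python is enough; the sketch below assumes the usual IMG+TXT convention that every image is paired with a same-named `.txt` file holding its comma-separated tags:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/kahili_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its tag file (assumed same-stem .txt convention)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in {'.png', '.jpg', '.jpeg', '.webp'}:
        continue
    txt_path = os.path.join(dataset_dir, stem + '.txt')
    if os.path.exists(txt_path):
        with open(txt_path, 'r', encoding='utf-8') as f:
            print(name, '->', f.read().strip())
```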
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, blush, hetero, short_sleeves, spread_legs, 1boy, nipples, penis, sex, skirt, socks, striped_shirt, vaginal, cum_in_pussy, girl_on_top, gloves, open_mouth, shirt_lift, sweat, bar_censor, large_breasts, medium_breasts, navel, no_bra, pubic_hair, squatting, straddling, underwear |
| 1 | 25 |  |  |  |  |  | 1girl, collared_shirt, short_sleeves, striped_shirt, closed_mouth, golf_club, holding, kneehighs, shoes, solo, full_body, blue_skirt, simple_background, looking_at_viewer, buttons, white_background, miniskirt, pencil_skirt, standing, white_footwear, frown, v-shaped_eyebrows, medium_breasts, squatting, blush, hat, white_gloves |
| 2 | 5 |  |  |  |  |  | 1girl, closed_mouth, collared_shirt, golf_club, holding, short_sleeves, solo, striped_shirt, blue_skirt, white_background, white_gloves, frown, looking_at_viewer, simple_background, >:(, buttons, hand_on_hip, single_glove, standing |
| 3 | 6 |  |  |  |  |  | 1girl, collared_shirt, short_sleeves, simple_background, solo, upper_body, closed_mouth, striped_shirt, white_background, buttons, looking_at_viewer, blush, eyelashes, medium_breasts, sketch |
| 4 | 6 |  |  |  |  |  | 1girl, blush, collarbone, nipples, looking_at_viewer, pussy, day, large_breasts, outdoors, solo, sweat, censored, cloud, completely_nude, golf_club, grass, navel, open_mouth, sky, socks, squatting, very_long_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | hetero | short_sleeves | spread_legs | 1boy | nipples | penis | sex | skirt | socks | striped_shirt | vaginal | cum_in_pussy | girl_on_top | gloves | open_mouth | shirt_lift | sweat | bar_censor | large_breasts | medium_breasts | navel | no_bra | pubic_hair | squatting | straddling | underwear | collared_shirt | closed_mouth | golf_club | holding | kneehighs | shoes | solo | full_body | blue_skirt | simple_background | looking_at_viewer | buttons | white_background | miniskirt | pencil_skirt | standing | white_footwear | frown | v-shaped_eyebrows | hat | white_gloves | >:( | hand_on_hip | single_glove | upper_body | eyelashes | sketch | collarbone | pussy | day | outdoors | censored | cloud | completely_nude | grass | sky | very_long_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------|:----------------|:--------------|:-------|:----------|:--------|:------|:--------|:--------|:----------------|:----------|:---------------|:--------------|:---------|:-------------|:-------------|:--------|:-------------|:----------------|:-----------------|:--------|:---------|:-------------|:------------|:-------------|:------------|:-----------------|:---------------|:------------|:----------|:------------|:--------|:-------|:------------|:-------------|:--------------------|:--------------------|:----------|:-------------------|:------------|:---------------|:-----------|:-----------------|:--------|:--------------------|:------|:---------------|:------|:--------------|:---------------|:-------------|:------------|:---------|:-------------|:--------|:------|:-----------|:-----------|:--------|:------------------|:--------|:------|:-----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 25 |  |  |  |  |  | X | X | | X | | | | | | | | X | | | | | | | | | | X | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | | | X | | X | X | X | X | X | | | X | | X | | | X | X | X | X | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | | X | | | | | | | | X | | | | | | | | | | X | | | | | | | X | X | | | | | X | | | X | X | X | X | | | | | | | | | | | | X | X | X | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | | | | | X | | | | X | | | | | | X | | X | | X | | X | | | X | | | | | X | | | | X | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kahili_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T16:58:02+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T19:15:26+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kahili (Pokémon)
===========================
This is the dataset of kahili (Pokémon), containing 229 images and their tags.
The core tags of this character are 'long\_hair, ahoge, blue\_hair, visor\_cap, mole, mole\_under\_eye, breasts, light\_blue\_hair, blue\_eyes, blue\_headwear', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ce8ebfe4d20611096042eaf95480fb732e937823
|
# Dataset Card for Evaluation run of StudentLLM/Alpagasus-2-13b-QLoRA-merged
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/StudentLLM/Alpagasus-2-13b-QLoRA-merged
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [StudentLLM/Alpagasus-2-13b-QLoRA-merged](https://huggingface.co/StudentLLM/Alpagasus-2-13b-QLoRA-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_StudentLLM__Alpagasus-2-13b-QLoRA-merged",
"harness_winogrande_5",
split="train")
```
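The per-config splits are timestamped; according to this card's configs metadata, each config also exposes a `latest` split, and the aggregated scores live in the `results` config. A sketch for pulling just the aggregates (config and split names are taken from the metadata below, so adjust if the repo layout changes):

```python
from datasets import load_dataset

# Load the aggregated scores; "results" and "latest" come from this
# card's configs metadata.
results = load_dataset(
    "open-llm-leaderboard/details_StudentLLM__Alpagasus-2-13b-QLoRA-merged",
    "results",
    split="latest",
)
print(results)
```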
## Latest results
These are the [latest results from run 2023-10-27T20:12:36.073167](https://huggingface.co/datasets/open-llm-leaderboard/details_StudentLLM__Alpagasus-2-13b-QLoRA-merged/blob/main/results_2023-10-27T20-12-36.073167.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001572986577181208,
"em_stderr": 0.000405845113241773,
"f1": 0.06265939597315423,
"f1_stderr": 0.001378921060077413,
"acc": 0.4424643648503177,
"acc_stderr": 0.010216085204246378
},
"harness|drop|3": {
"em": 0.001572986577181208,
"em_stderr": 0.000405845113241773,
"f1": 0.06265939597315423,
"f1_stderr": 0.001378921060077413
},
"harness|gsm8k|5": {
"acc": 0.11144806671721001,
"acc_stderr": 0.008668021353794427
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698329
}
}
```
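Reading these numbers: `em` and `f1` in the `all` block come solely from `harness|drop|3` (the only task reporting them), and the `all` accuracy appears to be the unweighted mean of the two accuracy-reporting tasks. A quick check (the averaging rule is an inference from the numbers above, not documented here):

```python
# gsm8k and winogrande accuracies from the run above; "all" acc matches
# their unweighted mean (assumed aggregation rule).
gsm8k, winogrande = 0.11144806671721001, 0.7734806629834254
assert abs((gsm8k + winogrande) / 2 - 0.4424643648503177) < 1e-12
```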
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_StudentLLM__Alpagasus-2-13b-QLoRA-merged
|
[
"region:us"
] |
2023-09-11T17:18:37+00:00
|
{"pretty_name": "Evaluation run of StudentLLM/Alpagasus-2-13b-QLoRA-merged", "dataset_summary": "Dataset automatically created during the evaluation run of model [StudentLLM/Alpagasus-2-13b-QLoRA-merged](https://huggingface.co/StudentLLM/Alpagasus-2-13b-QLoRA-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_StudentLLM__Alpagasus-2-13b-QLoRA-merged\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-27T20:12:36.073167](https://huggingface.co/datasets/open-llm-leaderboard/details_StudentLLM__Alpagasus-2-13b-QLoRA-merged/blob/main/results_2023-10-27T20-12-36.073167.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.000405845113241773,\n \"f1\": 0.06265939597315423,\n \"f1_stderr\": 0.001378921060077413,\n \"acc\": 0.4424643648503177,\n \"acc_stderr\": 0.010216085204246378\n },\n \"harness|drop|3\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.000405845113241773,\n \"f1\": 0.06265939597315423,\n \"f1_stderr\": 0.001378921060077413\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11144806671721001,\n \"acc_stderr\": 0.008668021353794427\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698329\n }\n}\n```", "repo_url": "https://huggingface.co/StudentLLM/Alpagasus-2-13b-QLoRA-merged", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|arc:challenge|25_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|arc:challenge|25_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T10_45_15.535939", "path": ["**/details_harness|drop|3_2023-10-26T10-45-15.535939.parquet"]}, {"split": "2023_10_27T20_12_36.073167", "path": ["**/details_harness|drop|3_2023-10-27T20-12-36.073167.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-27T20-12-36.073167.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T10_45_15.535939", "path": ["**/details_harness|gsm8k|5_2023-10-26T10-45-15.535939.parquet"]}, {"split": "2023_10_27T20_12_36.073167", "path": ["**/details_harness|gsm8k|5_2023-10-27T20-12-36.073167.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-27T20-12-36.073167.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hellaswag|10_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hellaswag|10_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T18-18-21.353761.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T18-18-21.353761.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T21-35-59.433556.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T21-35-59.433556.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T21-35-59.433556.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T21-35-59.433556.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", 
"data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": 
"2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": 
["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": 
["**/details_harness|hendrycksTest-management|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": 
"2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": 
"2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T21-35-59.433556.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T21-35-59.433556.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T10_45_15.535939", "path": ["**/details_harness|winogrande|5_2023-10-26T10-45-15.535939.parquet"]}, {"split": "2023_10_27T20_12_36.073167", "path": ["**/details_harness|winogrande|5_2023-10-27T20-12-36.073167.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-27T20-12-36.073167.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T18_18_21.353761", "path": ["results_2023-09-11T18-18-21.353761.parquet"]}, {"split": "2023_09_21T21_35_59.433556", "path": ["results_2023-09-21T21-35-59.433556.parquet"]}, {"split": "2023_10_26T10_45_15.535939", "path": ["results_2023-10-26T10-45-15.535939.parquet"]}, {"split": "2023_10_27T20_12_36.073167", "path": ["results_2023-10-27T20-12-36.073167.parquet"]}, {"split": "latest", "path": ["results_2023-10-27T20-12-36.073167.parquet"]}]}]}
|
2023-10-27T19:12:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of StudentLLM/Alpagasus-2-13b-QLoRA-merged
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model StudentLLM/Alpagasus-2-13b-QLoRA-merged on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
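A minimal sketch (the repository id is assumed here to follow the leaderboard's `details_<org>__<model>` naming convention; the configuration names are those listed in this card's metadata):

```python
from datasets import load_dataset

# "harness_winogrande_5" is one of the 64 task configurations;
# split="train" always points to the latest evaluation run.
data = load_dataset("open-llm-leaderboard/details_StudentLLM__Alpagasus-2-13b-QLoRA-merged",
	"harness_winogrande_5",
	split="train")
```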
## Latest results
These are the latest results from run 2023-10-27T20:12:36.073167 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of StudentLLM/Alpagasus-2-13b-QLoRA-merged",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model StudentLLM/Alpagasus-2-13b-QLoRA-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T20:12:36.073167(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of StudentLLM/Alpagasus-2-13b-QLoRA-merged",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model StudentLLM/Alpagasus-2-13b-QLoRA-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T20:12:36.073167(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of StudentLLM/Alpagasus-2-13b-QLoRA-merged## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model StudentLLM/Alpagasus-2-13b-QLoRA-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-27T20:12:36.073167(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9c2d704dc93041e0e00db7940f6fcee31c8dee27
|
# Dataset Card for "cache_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SodaDQ/cache_test
|
[
"region:us"
] |
2023-09-11T17:23:07+00:00
|
{"dataset_info": {"features": [{"name": "sodacl", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2075, "num_examples": 5}, {"name": "test", "num_bytes": 145801, "num_examples": 308}], "download_size": 74408, "dataset_size": 147876}}
|
2023-09-11T17:31:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cache_test"
More Information needed
|
[
"# Dataset Card for \"cache_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cache_test\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cache_test\"\n\nMore Information needed"
] |
644e3d5209ab161d376eb2adfb7abf38fc55d7e3
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# First positional argument: the details repository for this model;
# second: the task configuration. split="train" always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16",
	"harness_winogrande_5",
	split="train")
```
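The same call can also address one specific run by its timestamp-named split, or pull the aggregated metrics from the `results` configuration (split names as declared in this card's configs):

```python
from datasets import load_dataset

# One specific run, selected by its timestamp-named split:
run = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16",
	"harness_winogrande_5",
	split="2023_10_25T08_32_40.202592")

# Aggregated metrics for every run live in the "results" configuration:
results = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16",
	"results",
	split="latest")
```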
## Latest results
These are the [latest results from run 2023-10-25T08:32:40.202592](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16/blob/main/results_2023-10-25T08-32-40.202592.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.12458053691275167,
"em_stderr": 0.00338199412967585,
"f1": 0.17434458892617408,
"f1_stderr": 0.0034544534531551316,
"acc": 0.4538521744906123,
"acc_stderr": 0.010558058935343523
},
"harness|drop|3": {
"em": 0.12458053691275167,
"em_stderr": 0.00338199412967585,
"f1": 0.17434458892617408,
"f1_stderr": 0.0034544534531551316
},
"harness|gsm8k|5": {
"acc": 0.133434420015163,
"acc_stderr": 0.009366491609784486
},
"harness|winogrande|5": {
"acc": 0.7742699289660616,
"acc_stderr": 0.011749626260902559
}
}
```
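In this report the top-level `"all"` block simply combines the per-task metrics: `em`/`f1` come from drop alone, and the overall `acc` is the plain mean of the gsm8k and winogrande accuracies (a quick check):

```python
# Mean of the two task accuracies reported above:
acc_all = (0.133434420015163 + 0.7742699289660616) / 2
assert abs(acc_all - 0.4538521744906123) < 1e-12
```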
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16
|
[
"region:us"
] |
2023-09-11T17:33:51+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T08:32:40.202592](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16/blob/main/results_2023-10-25T08-32-40.202592.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.12458053691275167,\n \"em_stderr\": 0.00338199412967585,\n \"f1\": 0.17434458892617408,\n \"f1_stderr\": 0.0034544534531551316,\n \"acc\": 0.4538521744906123,\n \"acc_stderr\": 0.010558058935343523\n },\n \"harness|drop|3\": {\n \"em\": 0.12458053691275167,\n \"em_stderr\": 0.00338199412967585,\n \"f1\": 0.17434458892617408,\n \"f1_stderr\": 0.0034544534531551316\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.133434420015163,\n \"acc_stderr\": 0.009366491609784486\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7742699289660616,\n \"acc_stderr\": 0.011749626260902559\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|arc:challenge|25_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T08_32_40.202592", "path": ["**/details_harness|drop|3_2023-10-25T08-32-40.202592.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T08-32-40.202592.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T08_32_40.202592", "path": ["**/details_harness|gsm8k|5_2023-10-25T08-32-40.202592.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T08-32-40.202592.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hellaswag|10_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T18-33-35.889629.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-11T18-33-35.889629.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T18-33-35.889629.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-11T18-33-35.889629.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T18-33-35.889629.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-11T18-33-35.889629.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T08_32_40.202592", "path": ["**/details_harness|winogrande|5_2023-10-25T08-32-40.202592.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T08-32-40.202592.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_11T18_33_35.889629", "path": ["results_2023-09-11T18-33-35.889629.parquet"]}, {"split": "2023_10_25T08_32_40.202592", "path": ["results_2023-10-25T08-32-40.202592.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T08-32-40.202592.parquet"]}]}]}
|
2023-10-25T07:32:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
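For example (the same call as shown in the card's full text above):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE1_17w-r16",
	"harness_winogrande_5",
	split="train")
```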
## Latest results
These are the latest results from run 2023-10-25T08:32:40.202592 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T08:32:40.202592(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T08:32:40.202592(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
31,
31,
179,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T08:32:40.202592(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7413e88ec04a71d889d0beb0227b13c014f92bfc
|
# Dataset of shimamura_uzuki/島村卯月/시마무라우즈키 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of shimamura_uzuki/島村卯月/시마무라우즈키 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are `brown_hair, long_hair, one_side_up, brown_eyes, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 686.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimamura_uzuki_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 392.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimamura_uzuki_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1254 | 858.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimamura_uzuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 607.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimamura_uzuki_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1254 | 1.20 GiB | [Download](https://huggingface.co/datasets/CyberHarem/shimamura_uzuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shimamura_uzuki_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
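Building on the snippet above, a small follow-up sketch that counts tag frequencies across the extracted images (assuming `item.meta['tags']` is a mapping from tag name to score, as the print statement suggests):
```python
from collections import Counter

from waifuc.source import LocalSource

dataset_dir = 'dataset_dir'  # same directory the archive was extracted to above

# Tally how often each tag appears across the dataset; item.meta['tags']
# is assumed to map tag names to scores, as printed in the snippet above.
tag_counter = Counter()
for item in LocalSource(dataset_dir):
    tag_counter.update(item.meta['tags'].keys())

# Show the 20 most common tags and how many images carry each.
for tag, count in tag_counter.most_common(20):
    print(f'{tag}: {count}')
```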
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, solo, hair_bow, midriff, blush, crop_top, looking_at_viewer, navel, open_mouth, :d, breasts, short_sleeves, pink_skirt, plaid, white_background, simple_background |
| 1 | 15 |  |  |  |  |  | 1girl, blazer, school_uniform, solo, looking_at_viewer, open_mouth, blush, skirt, cherry_blossoms, flower, :d |
| 2 | 5 |  |  |  |  |  | 1girl, :d, blazer, blush, looking_at_viewer, open_mouth, school_uniform, solo, plaid_skirt, hair_bow, heart, love_letter |
| 3 | 11 |  |  |  |  |  | 1girl, blazer, school_uniform, smile, solo, one_eye_closed, open_mouth, ;d, blush, looking_at_viewer, plaid_skirt, half_updo |
| 4 | 13 |  |  |  |  |  | 1girl, open_mouth, solo, blazer, blush, school_uniform, microphone, :d, closed_eyes, crying_with_eyes_open |
| 5 | 16 |  |  |  |  |  | 1girl, long_sleeves, red_bowtie, school_uniform, solo, brown_jacket, blazer, blush, collared_shirt, bangs, looking_at_viewer, white_shirt, open_mouth, plaid_skirt, red_skirt, simple_background, white_background, pleated_skirt, :d, cowboy_shot, miniskirt, upper_body |
| 6 | 13 |  |  |  |  |  | 1girl, solo, mini_crown, white_gloves, looking_at_viewer, open_mouth, :d, epaulettes, white_thighhighs, blush, breasts, dress, double_v, white_background |
| 7 | 7 |  |  |  |  |  | 1girl, dress, solo, necklace, open_mouth, hair_bow, :d, half_updo, polka_dot, tiara |
| 8 | 10 |  |  |  |  |  | 1girl, smile, solo, enmaided, open_mouth, looking_at_viewer, maid_headdress, maid_apron, short_sleeves, blush, heart, puffy_sleeves, white_thighhighs |
| 9 | 5 |  |  |  |  |  | 1girl, bangs, blush, enmaided, frilled_apron, looking_at_viewer, maid_apron, maid_headdress, puffy_short_sleeves, solo, white_apron, :d, black_dress, open_mouth, neck_ribbon, ribbon_trim, simple_background, very_long_hair, medium_breasts, white_background |
| 10 | 10 |  |  |  |  |  | 1girl, solo, looking_at_viewer, sailor_collar, see-through, blush, medium_breasts, navel, sailor_hat, striped_bikini, wrist_cuffs, open_mouth, smile, hair_bow, red_bow, bikini_under_clothes, midriff, necklace, shirt, simple_background, wet, white_background, white_skirt, bangs, crop_top_overhang, pleated_skirt, shiny, striped_thighhighs, white_headwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | hair_bow | midriff | blush | crop_top | looking_at_viewer | navel | open_mouth | :d | breasts | short_sleeves | pink_skirt | plaid | white_background | simple_background | blazer | school_uniform | skirt | cherry_blossoms | flower | plaid_skirt | heart | love_letter | smile | one_eye_closed | ;d | half_updo | microphone | closed_eyes | crying_with_eyes_open | long_sleeves | red_bowtie | brown_jacket | collared_shirt | bangs | white_shirt | red_skirt | pleated_skirt | cowboy_shot | miniskirt | upper_body | mini_crown | white_gloves | epaulettes | white_thighhighs | dress | double_v | necklace | polka_dot | tiara | enmaided | maid_headdress | maid_apron | puffy_sleeves | frilled_apron | puffy_short_sleeves | white_apron | black_dress | neck_ribbon | ribbon_trim | very_long_hair | medium_breasts | sailor_collar | see-through | sailor_hat | striped_bikini | wrist_cuffs | red_bow | bikini_under_clothes | shirt | wet | white_skirt | crop_top_overhang | shiny | striped_thighhighs | white_headwear |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-------|:-----------|:----------|:--------|:-----------|:--------------------|:--------|:-------------|:-----|:----------|:----------------|:-------------|:--------|:-------------------|:--------------------|:---------|:-----------------|:--------|:------------------|:---------|:--------------|:--------|:--------------|:--------|:-----------------|:-----|:------------|:-------------|:--------------|:------------------------|:---------------|:-------------|:---------------|:-----------------|:--------|:--------------|:------------|:----------------|:--------------|:------------|:-------------|:-------------|:---------------|:-------------|:-------------------|:--------|:-----------|:-----------|:------------|:--------|:-----------|:-----------------|:-------------|:----------------|:----------------|:----------------------|:--------------|:--------------|:--------------|:--------------|:-----------------|:-----------------|:----------------|:--------------|:-------------|:-----------------|:--------------|:----------|:-----------------------|:--------|:------|:--------------|:--------------------|:--------|:---------------------|:-----------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | X | | | X | | X | | X | X | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | X | | X | | X | X | | | | | | | X | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | X | | | X | | X | | X | | | | | | | | X | X | | | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 13 |  |  |  |  |  | X | X | | | X | | | | X | X | | | | | | | X | X | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 16 |  |  |  |  |  | X | X | | | X | | X | | X | X | | | | | X | X | X | X | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 13 |  |  |  |  |  | X | X | | | X | | X | | X | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | X | X | | | | | | X | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 10 |  |  |  |  |  | X | X | | | X | | X | | X | | | X | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | X | | | X | | X | | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 10 | 10 |  |  |  |  |  | X | X | X | X | X | | X | X | X | | | | | | X | X | | | | | | | | | X | | | | | | | | | | | X | | | X | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/shimamura_uzuki_idolmastercinderellagirls
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-11T17:35:49+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T09:28:00+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of shimamura\_uzuki/島村卯月/시마무라우즈키 (THE iDOLM@STER: Cinderella Girls)
===========================================================================
This is the dataset of shimamura\_uzuki/島村卯月/시마무라우즈키 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, long\_hair, one\_side\_up, brown\_eyes, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d5c45e38b17094500ffc41ee96343305da0343aa
|
# Dataset Card for "91df01ab"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
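Given the `dataset_info` recorded for this card (a single `train` split with `result` string and `id` int64 columns), a minimal load sketch:
```python
from datasets import load_dataset

# Repository id as recorded in this dump; dataset_info declares a single
# "train" split with `result` (string) and `id` (int64) columns.
ds = load_dataset("results-sd-v1-5-sd-v2-1-if-v1-0-karlo/91df01ab", split="train")
print(ds[0])  # expected shape: {'result': <str>, 'id': <int>}
```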
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/91df01ab
|
[
"region:us"
] |
2023-09-11T17:37:45+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1340, "dataset_size": 184}}
|
2023-09-11T17:37:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "91df01ab"
More Information needed
|
[
"# Dataset Card for \"91df01ab\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"91df01ab\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"91df01ab\"\n\nMore Information needed"
] |