| Column | Type | Min | Max |
|:----------------|:---------------|----:|------:|
| sha | string (chars) | 40 | 40 |
| text | string (chars) | 1 | 13.4M |
| id | string (chars) | 2 | 117 |
| tags | list (items) | 1 | 7.91k |
| created_at | string (chars) | 25 | 25 |
| metadata | string (chars) | 2 | 875k |
| last_modified | string (chars) | 25 | 25 |
| arxiv | list (items) | 0 | 25 |
| languages | list (items) | 0 | 7.91k |
| tags_str | string (chars) | 17 | 159k |
| text_str | string (chars) | 1 | 447k |
| text_lists | list (items) | 0 | 352 |
| processed_texts | list (items) | 1 | 353 |
| tokens_length | list (items) | 1 | 353 |
| input_texts | list (items) | 1 | 40 |
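Read as a table, each column is either a string field (min/max character length) or a list field (min/max element count). A minimal sketch of inspecting such a dump with the `datasets` library; the repo id is hypothetical, since this preview does not name its source dataset:

```python
from datasets import load_dataset

# Hypothetical repo id: this preview does not name its source dataset.
ds = load_dataset("some-org/dataset-cards-dump", split="train")

print(ds.features)       # expect the columns tabulated above
row = ds[0]
print(len(row["text"]))  # string column: character count within the bounds above
print(len(row["tags"]))  # list column: element count within the bounds above
```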
43651fdd2ba209f8e20856cc71481d77a8ffb412
# Dataset Card for Dataset Name The Active Template Regression (ATR) dataset comprises 18 semantic category labels, including face, sunglasses, hat, scarf, hair, upper clothes, left arm, right arm, belt, pants, left leg, right leg, skirt, left shoe, right shoe, bag, dress, and background. A total of 17,700 images were incorporated into the ATR dataset. 16,700 images were designated for training, and 1,000 for testing. - **Curated by:** Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan - **Shared by:** Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan - **License:** MIT # Dataset Sources - **Repository:** https://github.com/lemondan/HumanParsing-Dataset - **Paper:** Deep Human Parsing with Active Template Regression # Human Parsing Labels - 0: **background** - 1: **hat** - 2: **hair** - 3: **sunglasses** - 4: **upperclothes** - 5: **skirt** - 6: **pants** - 7: **dress** - 8: **belt** - 9: **leftshoe** - 10: **rightshoe** - 11: **face** - 12: **leftleg** - 13: **rightleg** - 14: **leftarm** - 15: **rightarm** - 16: **bag** - 17: **scarf** # Uses Semantic segmentation, and more specifically, human body parsing. # Dataset Card Authors Christian Kotait **BibTeX:** @article{liang2015deep, title={Deep human parsing with active template regression}, author={Liang, Xiaodan and Liu, Si and Shen, Xiaohui and Yang, Jianchao and Liu, Luoqi and Dong, Jian and Lin, Liang and Yan, Shuicheng}, journal={IEEE transactions on pattern analysis and machine intelligence}, volume={37}, number={12}, pages={2402--2414}, year={2015}, publisher={IEEE} }
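The label ids above map directly onto the segmentation masks. A minimal loading sketch, using the repo id and the `pixel_values`/`label` features given in this row's metadata, and assuming the mask stores raw class ids per pixel:

```python
from datasets import load_dataset
import numpy as np

ATR_LABELS = [
    "background", "hat", "hair", "sunglasses", "upperclothes", "skirt",
    "pants", "dress", "belt", "leftshoe", "rightshoe", "face",
    "leftleg", "rightleg", "leftarm", "rightarm", "bag", "scarf",
]

ds = load_dataset("ckotait/ATRDataset", split="validation")
example = ds[0]
# Assumption: the "label" image stores raw class ids (0-17) per pixel.
mask = np.array(example["label"])
print([ATR_LABELS[int(i)] for i in np.unique(mask)])  # classes present in the mask
```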
ckotait/ATRDataset
[ "region:us" ]
2023-11-23T16:20:10+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 674327851.666, "num_examples": 16706}, {"name": "validation", "num_bytes": 46935738.0, "num_examples": 1000}, {"name": "test", "num_bytes": 16859858.0, "num_examples": 200}], "download_size": 813600043, "dataset_size": 738123447.666}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-12-04T15:01:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name The Active Template Regression (ATR) dataset comprises 18 semantic category labels, including face, sunglasses, hat, scarf, hair, upper clothes, left arm, right arm, belt, pants, left leg, right leg, skirt, left shoe, right shoe, bag, dress, and background. A total of 17,700 images were incorporated into the ATR dataset. 16,700 images were designated for training, and 1,000 for testing. - Curated by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan - Shared by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan - License: MIT # Dataset Sources - Repository: URL - Paper: Deep Human Parsing with Active Template Regression # Human Parsing Labels - 0: background - 1: hat - 2: hair - 3: sunglasses - 4: upperclothes - 5: skirt - 6: pants - 7: dress - 8: belt - 9: leftshoe - 10: rightshoe - 11: face - 12: leftleg - 13: rightleg - 14: leftarm - 15: rightarm - 16: bag - 17: scarf # Uses Semantic segmentation, and more specifically, human body parsing. # Dataset Card Authors Christian Kotait BibTeX: @article{liang2015deep, title={Deep human parsing with active template regression}, author={Liang, Xiaodan and Liu, Si and Shen, Xiaohui and Yang, Jianchao and Liu, Luoqi and Dong, Jian and Lin, Liang and Yan, Shuicheng}, journal={IEEE transactions on pattern analysis and machine intelligence}, volume={37}, number={12}, pages={2402--2414}, year={2015}, publisher={IEEE} }
[ "# Dataset Card for Dataset Name\n\nThe Active Template Regression (ATR) dataset comprises 18 semantic category labels, including face, sunglasses, hat, scarf, hair, upper clothes, left arm, right arm, belt, pants, left leg, right leg, skirt, left shoe, right shoe, bag, dress, and background. A total of 17,700 images were incorporated into the ATR dataset. 16,700 images were designated for training, and 1,000 for testing.\n\n\n- Curated by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan\n- Shared by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan\n- License: MIT", "# Dataset Sources\n\n- Repository: URL\n- Paper: Deep Human Parsing with Active Template Regression", "# Human Parsing Labels\n\n- 0: background\n- 1: hat\n- 2: hair\n- 3: sunglasses\n- 4: upperclothes\n- 5: skirt\n- 6: pants\n- 7: dress\n- 8: belt\n- 9: leftshoe\n- 10: rightshoe\n- 11: face\n- 12: leftleg\n- 13: rightleg\n- 14: leftarm\n- 15: rightarm\n- 16: bag\n- 17: scarf", "# Uses\n\nSemantic segmentation, and more specifically, human body parsing.", "# Dataset Card Authors\n\nChristian Kotait\n\nBibTeX:\n\n@article{liang2015deep,\n title={Deep human parsing with active template regression},\n author={Liang, Xiaodan and Liu, Si and Shen, Xiaohui and Yang, Jianchao and Liu, Luoqi and Dong, Jian and Lin, Liang and Yan, Shuicheng},\n journal={IEEE transactions on pattern analysis and machine intelligence},\n volume={37},\n number={12},\n pages={2402--2414},\n year={2015},\n publisher={IEEE}\n}" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\nThe Active Template Regression (ATR) dataset comprises 18 semantic category labels, including face, sunglasses, hat, scarf, hair, upper clothes, left arm, right arm, belt, pants, left leg, right leg, skirt, left shoe, right shoe, bag, dress, and background. A total of 17,700 images were incorporated into the ATR dataset. 16,700 images were designated for training, and 1,000 for testing.\n\n\n- Curated by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan\n- Shared by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan\n- License: MIT", "# Dataset Sources\n\n- Repository: URL\n- Paper: Deep Human Parsing with Active Template Regression", "# Human Parsing Labels\n\n- 0: background\n- 1: hat\n- 2: hair\n- 3: sunglasses\n- 4: upperclothes\n- 5: skirt\n- 6: pants\n- 7: dress\n- 8: belt\n- 9: leftshoe\n- 10: rightshoe\n- 11: face\n- 12: leftleg\n- 13: rightleg\n- 14: leftarm\n- 15: rightarm\n- 16: bag\n- 17: scarf", "# Uses\n\nSemantic segmentation, and more specifically, human body parsing.", "# Dataset Card Authors\n\nChristian Kotait\n\nBibTeX:\n\n@article{liang2015deep,\n title={Deep human parsing with active template regression},\n author={Liang, Xiaodan and Liu, Si and Shen, Xiaohui and Yang, Jianchao and Liu, Luoqi and Dong, Jian and Lin, Liang and Yan, Shuicheng},\n journal={IEEE transactions on pattern analysis and machine intelligence},\n volume={37},\n number={12},\n pages={2402--2414},\n year={2015},\n publisher={IEEE}\n}" ]
[ 6, 200, 23, 90, 17, 138 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\nThe Active Template Regression (ATR) dataset comprises 18 semantic category labels, including face, sunglasses, hat, scarf, hair, upper clothes, left arm, right arm, belt, pants, left leg, right leg, skirt, left shoe, right shoe, bag, dress, and background. A total of 17,700 images were incorporated into the ATR dataset. 16,700 images were designated for training, and 1,000 for testing.\n\n\n- Curated by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan\n- Shared by: Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan\n- License: MIT# Dataset Sources\n\n- Repository: URL\n- Paper: Deep Human Parsing with Active Template Regression# Human Parsing Labels\n\n- 0: background\n- 1: hat\n- 2: hair\n- 3: sunglasses\n- 4: upperclothes\n- 5: skirt\n- 6: pants\n- 7: dress\n- 8: belt\n- 9: leftshoe\n- 10: rightshoe\n- 11: face\n- 12: leftleg\n- 13: rightleg\n- 14: leftarm\n- 15: rightarm\n- 16: bag\n- 17: scarf# Uses\n\nSemantic segmentation, and more specifically, human body parsing.# Dataset Card Authors\n\nChristian Kotait\n\nBibTeX:\n\n@article{liang2015deep,\n title={Deep human parsing with active template regression},\n author={Liang, Xiaodan and Liu, Si and Shen, Xiaohui and Yang, Jianchao and Liu, Luoqi and Dong, Jian and Lin, Liang and Yan, Shuicheng},\n journal={IEEE transactions on pattern analysis and machine intelligence},\n volume={37},\n number={12},\n pages={2402--2414},\n year={2015},\n publisher={IEEE}\n}" ]
e03acf4ed208debc9f061993f1b4da1ff5865b85
# mtkinit/test_xyz_aaa Created from AIOD platform
mtkinit/mtkinit_test_xyz_aaa
[ "region:us" ]
2023-11-23T16:21:24+00:00
{"pretty_name": "mtkinit/test_xyz_aaa"}
2023-11-23T16:29:27+00:00
[]
[]
TAGS #region-us
# mtkinit/test_xyz_aaa Created from AIOD platform
[ "# mtkinit/test_xyz_aaa\nCreated from AIOD platform" ]
[ "TAGS\n#region-us \n", "# mtkinit/test_xyz_aaa\nCreated from AIOD platform" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# mtkinit/test_xyz_aaa\nCreated from AIOD platform" ]
a1717f7cfd165a6027a1b68d9031b697057b31fd
# mtkinit/andre-jo-vo-potesenie Created from AIOD platform
mtkinit/mtkinit_andre_jo_vo_potesenie
[ "region:us" ]
2023-11-23T16:30:19+00:00
{"pretty_name": "mtkinit/andre-jo-vo-potesenie"}
2023-11-23T16:30:19+00:00
[]
[]
TAGS #region-us
# mtkinit/andre-jo-vo-potesenie Created from AIOD platform
[ "# mtkinit/andre-jo-vo-potesenie\nCreated from AIOD platform" ]
[ "TAGS\n#region-us \n", "# mtkinit/andre-jo-vo-potesenie\nCreated from AIOD platform" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# mtkinit/andre-jo-vo-potesenie\nCreated from AIOD platform" ]
96ba757e93e830302880b85f42ba4765d621e8a3
This dataset comprises a collection of the most recent (up to 23 November 2023) 10K arXiv papers' metadata in cs.CL (Computation and Language). Each metadata entry has been enriched with the 'title' and 'abstract' embeddings, generated using Cohere's Embed-v3 for 'clustering'.
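Since the vectors were produced with Embed-v3's 'clustering' input type, a natural first use is k-means over them. A minimal sketch; the embedding column name is an assumption, as the card does not spell it out:

```python
from datasets import load_dataset
from sklearn.cluster import KMeans
import numpy as np

ds = load_dataset("dcarpintero/arxiv.cs.CL.embedv3.clustering.medium", split="train")

# Assumed column name; check ds.column_names for the actual one.
X = np.array(ds["abstract_embedding"])
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # papers per cluster
```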
dcarpintero/arxiv.cs.CL.embedv3.clustering.medium
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "region:us" ]
2023-11-23T16:51:50+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "arxiv.cs.CL.embedv3.clustering.medium"}
2023-11-23T17:13:26+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
This dataset comprises a collection of the most recent (up to 23 November 2023) 10K arXiv papers' metadata in cs.CL (Computation and Language). Each metadata entry has been enriched with the 'title' and 'abstract' embeddings, generated using Cohere's Embed-v3 for 'clustering'.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n" ]
[ 41 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n" ]
62224608b02fb6643606d6305200662bee43734a
train split: - 20k documents from Wikipedia (The Pile) valid split: - 5k documents from Wikipedia (The Pile)
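A quick sanity check of those split sizes, as a sketch:

```python
from datasets import load_dataset

ds = load_dataset("yurakuratov/toy_wiki")
for split, part in ds.items():
    print(split, len(part))  # expect ~20k train / ~5k valid, per the description
```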
yurakuratov/toy_wiki
[ "region:us" ]
2023-11-23T17:19:43+00:00
{}
2023-11-23T17:25:18+00:00
[]
[]
TAGS #region-us
train split: - 20k documents from Wikipedia (The Pile) valid split: - 5k documents from Wikipedia (The Pile)
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
91a115aecec42333f751e99ce99f749199ba0908
# Dataset Card for "dataset-creator-reddit-amitheasshole" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
severo/tmp-1
[ "region:us" ]
2023-11-23T17:21:55+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "poster", "dtype": "string"}, {"name": "date_utc", "dtype": "timestamp[us]"}, {"name": "flair", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "permalink", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 250, "num_examples": 1}], "download_size": 5088, "dataset_size": 250}}
2023-11-23T17:41:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset-creator-reddit-amitheasshole" More Information needed
[ "# Dataset Card for \"dataset-creator-reddit-amitheasshole\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset-creator-reddit-amitheasshole\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dataset-creator-reddit-amitheasshole\"\n\nMore Information needed" ]
04cce337b3e7cd297a7895e6241cdf88b693486e
# Dataset of hoshino (Blue Archive) This is the dataset of hoshino (Blue Archive), containing 150 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)), using [LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 150 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 420 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 477 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 150 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 150 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 150 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 420 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 420 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 300 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 477 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 477 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
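The packages in the table are plain zip files inside the dataset repo, so any of them can be fetched and unpacked with `huggingface_hub`; a sketch, assuming the zips sit at the repo root as the relative links suggest:

```python
from huggingface_hub import hf_hub_download
import zipfile

path = hf_hub_download(
    repo_id="AppleHarem/hoshino_bluearchive",
    filename="dataset-raw.zip",  # any package name from the table above
    repo_type="dataset",
)
with zipfile.ZipFile(path) as zf:
    zf.extractall("hoshino_raw")  # images plus their meta information
```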
AppleHarem/hoshino_bluearchive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-11-23T17:28:07+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-11-23T17:28:26+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of hoshino (Blue Archive) ================================= This is the dataset of hoshino (Blue Archive), containing 150 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization), using LittleAppleWebUI.
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
6bf577a360a55b7b6ce38c788f85f371a9824eb4
# Dataset Card for Evaluation run of habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1](https://huggingface.co/habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-23T17:25:53.937618](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1_public/blob/main/results_2023-11-23T17-25-53.937618.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.264322164864966, "acc_stderr": 0.031129712846210546, "acc_norm": 0.266254876290074, "acc_norm_stderr": 0.03192353802372127, "mc1": 0.23133414932680538, "mc1_stderr": 0.014761945174862671, "mc2": 0.3834755033374754, "mc2_stderr": 0.014667489374542058, "em": 0.0016778523489932886, "em_stderr": 0.00041913301788268684, "f1": 0.041514261744966606, "f1_stderr": 0.00114449364248572 }, "harness|arc:challenge|25": { "acc": 0.2986348122866894, "acc_stderr": 0.013374078615068756, "acc_norm": 0.32849829351535836, "acc_norm_stderr": 0.013724978465537378 }, "harness|hellaswag|10": { "acc": 0.44453296156144195, "acc_stderr": 0.004958983318274571, "acc_norm": 0.581557458673571, "acc_norm_stderr": 0.004922953651577688 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.26, "acc_stderr": 0.0440844002276808, "acc_norm": 0.26, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.3333333333333333, "acc_stderr": 0.04072314811876837, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.04072314811876837 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.2565789473684211, "acc_stderr": 0.0355418036802569, "acc_norm": 0.2565789473684211, "acc_norm_stderr": 0.0355418036802569 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.21, "acc_stderr": 0.040936018074033256, "acc_norm": 0.21, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2339622641509434, "acc_stderr": 0.02605529690115292, "acc_norm": 0.2339622641509434, "acc_norm_stderr": 0.02605529690115292 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2638888888888889, "acc_stderr": 0.03685651095897532, "acc_norm": 0.2638888888888889, "acc_norm_stderr": 0.03685651095897532 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.23, "acc_stderr": 0.04229525846816506, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.2658959537572254, "acc_stderr": 0.03368762932259431, "acc_norm": 0.2658959537572254, "acc_norm_stderr": 0.03368762932259431 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.29411764705882354, "acc_stderr": 0.04533838195929774, "acc_norm": 0.29411764705882354, "acc_norm_stderr": 0.04533838195929774 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.26, "acc_stderr": 0.0440844002276808, "acc_norm": 0.26, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.20425531914893616, "acc_stderr": 0.02635515841334942, "acc_norm": 0.20425531914893616, "acc_norm_stderr": 0.02635515841334942 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.24561403508771928, "acc_stderr": 0.04049339297748141, "acc_norm": 0.24561403508771928, "acc_norm_stderr": 0.04049339297748141 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.3103448275862069, "acc_stderr": 0.03855289616378948, "acc_norm": 0.3103448275862069, "acc_norm_stderr": 0.03855289616378948 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.2804232804232804, "acc_stderr": 0.02313528797432563, "acc_norm": 0.2804232804232804, "acc_norm_stderr": 
0.02313528797432563 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.14285714285714285, "acc_stderr": 0.03129843185743808, "acc_norm": 0.14285714285714285, "acc_norm_stderr": 0.03129843185743808 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.24838709677419354, "acc_stderr": 0.02458002892148101, "acc_norm": 0.24838709677419354, "acc_norm_stderr": 0.02458002892148101 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.2561576354679803, "acc_stderr": 0.030712730070982592, "acc_norm": 0.2561576354679803, "acc_norm_stderr": 0.030712730070982592 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.2545454545454545, "acc_stderr": 0.0340150671524904, "acc_norm": 0.2545454545454545, "acc_norm_stderr": 0.0340150671524904 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.2777777777777778, "acc_stderr": 0.03191178226713547, "acc_norm": 0.2777777777777778, "acc_norm_stderr": 0.03191178226713547 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.29015544041450775, "acc_stderr": 0.032752644677915145, "acc_norm": 0.29015544041450775, "acc_norm_stderr": 0.032752644677915145 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2512820512820513, "acc_stderr": 0.021992016662370547, "acc_norm": 0.2512820512820513, "acc_norm_stderr": 0.021992016662370547 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.26296296296296295, "acc_stderr": 0.02684205787383371, "acc_norm": 0.26296296296296295, "acc_norm_stderr": 0.02684205787383371 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.23109243697478993, "acc_stderr": 0.027381406927868952, "acc_norm": 0.23109243697478993, "acc_norm_stderr": 0.027381406927868952 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2781456953642384, "acc_stderr": 0.03658603262763743, "acc_norm": 0.2781456953642384, "acc_norm_stderr": 0.03658603262763743 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.23486238532110093, "acc_stderr": 0.018175110510343578, "acc_norm": 0.23486238532110093, "acc_norm_stderr": 0.018175110510343578 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.26851851851851855, "acc_stderr": 0.03022522616001239, "acc_norm": 0.26851851851851855, "acc_norm_stderr": 0.03022522616001239 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.23529411764705882, "acc_stderr": 0.029771775228145638, "acc_norm": 0.23529411764705882, "acc_norm_stderr": 0.029771775228145638 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.27848101265822783, "acc_stderr": 0.029178682304842538, "acc_norm": 0.27848101265822783, "acc_norm_stderr": 0.029178682304842538 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.3004484304932735, "acc_stderr": 0.030769352008229136, "acc_norm": 0.3004484304932735, "acc_norm_stderr": 0.030769352008229136 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.19083969465648856, "acc_stderr": 0.03446513350752598, "acc_norm": 0.19083969465648856, "acc_norm_stderr": 0.03446513350752598 }, "harness|hendrycksTest-international_law|5": { "acc": 0.32231404958677684, "acc_stderr": 0.042664163633521664, "acc_norm": 0.32231404958677684, "acc_norm_stderr": 
0.042664163633521664 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.3055555555555556, "acc_stderr": 0.044531975073749834, "acc_norm": 0.3055555555555556, "acc_norm_stderr": 0.044531975073749834 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.294478527607362, "acc_stderr": 0.03581165790474082, "acc_norm": 0.294478527607362, "acc_norm_stderr": 0.03581165790474082 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.26785714285714285, "acc_stderr": 0.04203277291467763, "acc_norm": 0.26785714285714285, "acc_norm_stderr": 0.04203277291467763 }, "harness|hendrycksTest-management|5": { "acc": 0.1650485436893204, "acc_stderr": 0.036756688322331886, "acc_norm": 0.1650485436893204, "acc_norm_stderr": 0.036756688322331886 }, "harness|hendrycksTest-marketing|5": { "acc": 0.2222222222222222, "acc_stderr": 0.027236013946196676, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.027236013946196676 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.26, "acc_stderr": 0.04408440022768078, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.27586206896551724, "acc_stderr": 0.01598281477469563, "acc_norm": 0.27586206896551724, "acc_norm_stderr": 0.01598281477469563 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.2543352601156069, "acc_stderr": 0.023445826276545532, "acc_norm": 0.2543352601156069, "acc_norm_stderr": 0.023445826276545532 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24692737430167597, "acc_stderr": 0.014422292204808835, "acc_norm": 0.24692737430167597, "acc_norm_stderr": 0.014422292204808835 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.26143790849673204, "acc_stderr": 0.025160998214292456, "acc_norm": 0.26143790849673204, "acc_norm_stderr": 0.025160998214292456 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.2958199356913183, "acc_stderr": 0.025922371788818795, "acc_norm": 0.2958199356913183, "acc_norm_stderr": 0.025922371788818795 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.2962962962962963, "acc_stderr": 0.025407197798890148, "acc_norm": 0.2962962962962963, "acc_norm_stderr": 0.025407197798890148 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.23049645390070922, "acc_stderr": 0.025123739226872405, "acc_norm": 0.23049645390070922, "acc_norm_stderr": 0.025123739226872405 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2607561929595828, "acc_stderr": 0.011213471559602341, "acc_norm": 0.2607561929595828, "acc_norm_stderr": 0.011213471559602341 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.3088235294117647, "acc_stderr": 0.028064998167040094, "acc_norm": 0.3088235294117647, "acc_norm_stderr": 0.028064998167040094 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.2434640522875817, "acc_stderr": 0.017362473762146634, "acc_norm": 0.2434640522875817, "acc_norm_stderr": 0.017362473762146634 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.20909090909090908, "acc_stderr": 0.038950910157241364, "acc_norm": 0.20909090909090908, "acc_norm_stderr": 0.038950910157241364 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.2, "acc_stderr": 0.025607375986579157, "acc_norm": 0.2, "acc_norm_stderr": 0.025607375986579157 }, "harness|hendrycksTest-sociology|5": { "acc": 0.2537313432835821, "acc_stderr": 0.03076944496729601, "acc_norm": 0.2537313432835821, "acc_norm_stderr": 0.03076944496729601 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.29, "acc_stderr": 0.04560480215720683, "acc_norm": 0.29, 
"acc_norm_stderr": 0.04560480215720683 }, "harness|hendrycksTest-virology|5": { "acc": 0.2710843373493976, "acc_stderr": 0.03460579907553027, "acc_norm": 0.2710843373493976, "acc_norm_stderr": 0.03460579907553027 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.2631578947368421, "acc_stderr": 0.03377310252209195, "acc_norm": 0.2631578947368421, "acc_norm_stderr": 0.03377310252209195 }, "harness|truthfulqa:mc|0": { "mc1": 0.23133414932680538, "mc1_stderr": 0.014761945174862671, "mc2": 0.3834755033374754, "mc2_stderr": 0.014667489374542058 }, "harness|winogrande|5": { "acc": 0.5769534333070244, "acc_stderr": 0.013885055359056481 }, "harness|drop|3": { "em": 0.0016778523489932886, "em_stderr": 0.00041913301788268684, "f1": 0.041514261744966606, "f1_stderr": 0.00114449364248572 }, "harness|gsm8k|5": { "acc": 0.004548900682335102, "acc_stderr": 0.0018535550440036193 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
[ "region:us" ]
2023-11-23T17:28:23+00:00
{"pretty_name": "Evaluation run of habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1](https://huggingface.co/habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T17:25:53.937618](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1_public/blob/main/results_2023-11-23T17-25-53.937618.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.264322164864966,\n \"acc_stderr\": 0.031129712846210546,\n \"acc_norm\": 0.266254876290074,\n \"acc_norm_stderr\": 0.03192353802372127,\n \"mc1\": 0.23133414932680538,\n \"mc1_stderr\": 0.014761945174862671,\n \"mc2\": 0.3834755033374754,\n \"mc2_stderr\": 0.014667489374542058,\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788268684,\n \"f1\": 0.041514261744966606,\n \"f1_stderr\": 0.00114449364248572\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.2986348122866894,\n \"acc_stderr\": 0.013374078615068756,\n \"acc_norm\": 0.32849829351535836,\n \"acc_norm_stderr\": 0.013724978465537378\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.44453296156144195,\n \"acc_stderr\": 0.004958983318274571,\n \"acc_norm\": 0.581557458673571,\n \"acc_norm_stderr\": 0.004922953651577688\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.2565789473684211,\n \"acc_stderr\": 0.0355418036802569,\n \"acc_norm\": 0.2565789473684211,\n \"acc_norm_stderr\": 0.0355418036802569\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2339622641509434,\n \"acc_stderr\": 0.02605529690115292,\n \"acc_norm\": 0.2339622641509434,\n \"acc_norm_stderr\": 0.02605529690115292\n },\n 
\"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2638888888888889,\n \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.2638888888888889,\n \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2658959537572254,\n \"acc_stderr\": 0.03368762932259431,\n \"acc_norm\": 0.2658959537572254,\n \"acc_norm_stderr\": 0.03368762932259431\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929774,\n \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929774\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.20425531914893616,\n \"acc_stderr\": 0.02635515841334942,\n \"acc_norm\": 0.20425531914893616,\n \"acc_norm_stderr\": 0.02635515841334942\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.04049339297748141,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.04049339297748141\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.3103448275862069,\n \"acc_stderr\": 0.03855289616378948,\n \"acc_norm\": 0.3103448275862069,\n \"acc_norm_stderr\": 0.03855289616378948\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2804232804232804,\n \"acc_stderr\": 0.02313528797432563,\n \"acc_norm\": 0.2804232804232804,\n \"acc_norm_stderr\": 0.02313528797432563\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.14285714285714285,\n \"acc_stderr\": 0.03129843185743808,\n \"acc_norm\": 0.14285714285714285,\n \"acc_norm_stderr\": 0.03129843185743808\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.24838709677419354,\n \"acc_stderr\": 0.02458002892148101,\n \"acc_norm\": 0.24838709677419354,\n \"acc_norm_stderr\": 0.02458002892148101\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2561576354679803,\n \"acc_stderr\": 0.030712730070982592,\n \"acc_norm\": 0.2561576354679803,\n \"acc_norm_stderr\": 0.030712730070982592\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2545454545454545,\n \"acc_stderr\": 0.0340150671524904,\n \"acc_norm\": 0.2545454545454545,\n \"acc_norm_stderr\": 0.0340150671524904\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.03191178226713547,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 
0.03191178226713547\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.29015544041450775,\n \"acc_stderr\": 0.032752644677915145,\n \"acc_norm\": 0.29015544041450775,\n \"acc_norm_stderr\": 0.032752644677915145\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2512820512820513,\n \"acc_stderr\": 0.021992016662370547,\n \"acc_norm\": 0.2512820512820513,\n \"acc_norm_stderr\": 0.021992016662370547\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.23109243697478993,\n \"acc_stderr\": 0.027381406927868952,\n \"acc_norm\": 0.23109243697478993,\n \"acc_norm_stderr\": 0.027381406927868952\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2781456953642384,\n \"acc_stderr\": 0.03658603262763743,\n \"acc_norm\": 0.2781456953642384,\n \"acc_norm_stderr\": 0.03658603262763743\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.23486238532110093,\n \"acc_stderr\": 0.018175110510343578,\n \"acc_norm\": 0.23486238532110093,\n \"acc_norm_stderr\": 0.018175110510343578\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.26851851851851855,\n \"acc_stderr\": 0.03022522616001239,\n \"acc_norm\": 0.26851851851851855,\n \"acc_norm_stderr\": 0.03022522616001239\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.029771775228145638,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.029771775228145638\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.27848101265822783,\n \"acc_stderr\": 0.029178682304842538,\n \"acc_norm\": 0.27848101265822783,\n \"acc_norm_stderr\": 0.029178682304842538\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3004484304932735,\n \"acc_stderr\": 0.030769352008229136,\n \"acc_norm\": 0.3004484304932735,\n \"acc_norm_stderr\": 0.030769352008229136\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.19083969465648856,\n \"acc_stderr\": 0.03446513350752598,\n \"acc_norm\": 0.19083969465648856,\n \"acc_norm_stderr\": 0.03446513350752598\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.32231404958677684,\n \"acc_stderr\": 0.042664163633521664,\n \"acc_norm\": 0.32231404958677684,\n \"acc_norm_stderr\": 0.042664163633521664\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.3055555555555556,\n \"acc_stderr\": 0.044531975073749834,\n \"acc_norm\": 0.3055555555555556,\n \"acc_norm_stderr\": 0.044531975073749834\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.294478527607362,\n \"acc_stderr\": 0.03581165790474082,\n \"acc_norm\": 0.294478527607362,\n \"acc_norm_stderr\": 0.03581165790474082\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.26785714285714285,\n \"acc_stderr\": 0.04203277291467763,\n \"acc_norm\": 0.26785714285714285,\n \"acc_norm_stderr\": 0.04203277291467763\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.1650485436893204,\n \"acc_stderr\": 0.036756688322331886,\n \"acc_norm\": 0.1650485436893204,\n \"acc_norm_stderr\": 0.036756688322331886\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.027236013946196676,\n \"acc_norm\": 0.2222222222222222,\n 
\"acc_norm_stderr\": 0.027236013946196676\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.27586206896551724,\n \"acc_stderr\": 0.01598281477469563,\n \"acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.01598281477469563\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.2543352601156069,\n \"acc_stderr\": 0.023445826276545532,\n \"acc_norm\": 0.2543352601156069,\n \"acc_norm_stderr\": 0.023445826276545532\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.26143790849673204,\n \"acc_stderr\": 0.025160998214292456,\n \"acc_norm\": 0.26143790849673204,\n \"acc_norm_stderr\": 0.025160998214292456\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2958199356913183,\n \"acc_stderr\": 0.025922371788818795,\n \"acc_norm\": 0.2958199356913183,\n \"acc_norm_stderr\": 0.025922371788818795\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.025407197798890148,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.025407197798890148\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.23049645390070922,\n \"acc_stderr\": 0.025123739226872405,\n \"acc_norm\": 0.23049645390070922,\n \"acc_norm_stderr\": 0.025123739226872405\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2607561929595828,\n \"acc_stderr\": 0.011213471559602341,\n \"acc_norm\": 0.2607561929595828,\n \"acc_norm_stderr\": 0.011213471559602341\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3088235294117647,\n \"acc_stderr\": 0.028064998167040094,\n \"acc_norm\": 0.3088235294117647,\n \"acc_norm_stderr\": 0.028064998167040094\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2434640522875817,\n \"acc_stderr\": 0.017362473762146634,\n \"acc_norm\": 0.2434640522875817,\n \"acc_norm_stderr\": 0.017362473762146634\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.038950910157241364,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.038950910157241364\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.025607375986579157,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.025607375986579157\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.2537313432835821,\n \"acc_stderr\": 0.03076944496729601,\n \"acc_norm\": 0.2537313432835821,\n \"acc_norm_stderr\": 0.03076944496729601\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2710843373493976,\n \"acc_stderr\": 0.03460579907553027,\n \"acc_norm\": 0.2710843373493976,\n \"acc_norm_stderr\": 0.03460579907553027\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.03377310252209195,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.03377310252209195\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23133414932680538,\n \"mc1_stderr\": 0.014761945174862671,\n 
\"mc2\": 0.3834755033374754,\n \"mc2_stderr\": 0.014667489374542058\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5769534333070244,\n \"acc_stderr\": 0.013885055359056481\n },\n \"harness|drop|3\": {\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788268684,\n \"f1\": 0.041514261744966606,\n \"f1_stderr\": 0.00114449364248572\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.004548900682335102,\n \"acc_stderr\": 0.0018535550440036193\n }\n}\n```", "repo_url": "https://huggingface.co/habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|arc:challenge|25_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|drop|3_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|gsm8k|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hellaswag|10_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-25-53.937618.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T17-25-53.937618.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-25-53.937618.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T17-25-53.937618.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["**/details_harness|winogrande|5_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T17-25-53.937618.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T17_25_53.937618", "path": ["results_2023-11-23T17-25-53.937618.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T17-25-53.937618.parquet"]}]}]}
2023-11-23T17:29:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T17:25:53.937618 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
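The load snippet referenced above ("you can for instance do the following:") was dropped when this card was flattened. A minimal sketch of what it would look like, assuming this details repo follows the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>_public` naming pattern; the exact repo id below is inferred from that pattern, not confirmed:

```python
from datasets import load_dataset

# Repo id is an assumption based on the leaderboard's naming convention;
# verify it exists on the Hub before relying on it.
data = load_dataset(
    "open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1_public",
    "harness_winogrande_5",  # any of the 64 task configurations works here
    split="train",           # per the card, "train" points at the latest results
)
```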
[ "# Dataset Card for Evaluation run of habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T17:25:53.937618(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T17:25:53.937618(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 34, 31, 183, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T17:25:53.937618(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
90d6653932fdbd451307ee6e04dde00099c2caa9
# Dataset Card for "contracts_v5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
paul-w-qs/contracts_v5
[ "region:us" ]
2023-11-23T17:36:01+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "N_ROWS", "dtype": "int64"}, {"name": "N_COLS", "dtype": "int64"}, {"name": "FONT_SIZE", "dtype": "int64"}, {"name": "FONT_NAME", "dtype": "string"}, {"name": "BORDER_THICKNESS", "dtype": "int64"}, {"name": "TABLE_STYLE", "dtype": "string"}, {"name": "NOISED", "dtype": "bool"}, {"name": "LABEL_NOISE", "dtype": "bool"}, {"name": "JSON_LABEL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 849678199.916, "num_examples": 11316}], "download_size": 819599314, "dataset_size": 849678199.916}}
2023-11-23T17:40:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "contracts_v5" More Information needed
[ "# Dataset Card for \"contracts_v5\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"contracts_v5\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"contracts_v5\"\n\nMore Information needed" ]
78b67491c43823fc169cab827ab3f82805e0235b
# Dataset Card for "multi_session_chat" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nayohan/multi_session_chat
[ "region:us" ]
2023-11-23T17:39:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "dataset", "dtype": "string"}, {"name": "dialoug_id", "dtype": "int64"}, {"name": "session_id", "dtype": "int64"}, {"name": "persona1", "sequence": "string"}, {"name": "persona2", "sequence": "string"}, {"name": "dialogue", "sequence": "string"}, {"name": "speaker", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 30863868, "num_examples": 17940}, {"name": "validation", "num_bytes": 6329337, "num_examples": 3000}, {"name": "test", "num_bytes": 5867348, "num_examples": 2505}], "download_size": 0, "dataset_size": 43060553}}
2023-11-23T17:48:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "multi_session_chat" More Information needed
[ "# Dataset Card for \"multi_session_chat\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"multi_session_chat\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"multi_session_chat\"\n\nMore Information needed" ]
0996b628b8e73af9ae95f7d09d44e0f800f71671
# Dataset Card for Evaluation run of Mohammed-Altaf/Medical-ChatBot ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Mohammed-Altaf/Medical-ChatBot - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [Mohammed-Altaf/Medical-ChatBot](https://huggingface.co/Mohammed-Altaf/Medical-ChatBot) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-23T17:51:39.546236](https://huggingface.co/datasets/open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot_public/blob/main/results_2023-11-23T17-51-39.546236.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.26179013153474723, "acc_stderr": 0.030983466516240496, "acc_norm": 0.262541940150786, "acc_norm_stderr": 0.031759054123644256, "mc1": 0.26560587515299877, "mc1_stderr": 0.015461027627253597, "mc2": 0.41044189971272244, "mc2_stderr": 0.015229110119195517, "em": 0.001572986577181208, "em_stderr": 0.000405845113241773, "f1": 0.06370071308724842, "f1_stderr": 0.0014122765324405353 }, "harness|arc:challenge|25": { "acc": 0.2790102389078498, "acc_stderr": 0.013106784883601336, "acc_norm": 0.3046075085324232, "acc_norm_stderr": 0.013449522109932487 }, "harness|hellaswag|10": { "acc": 0.3324039036048596, "acc_stderr": 0.004701121421805424, "acc_norm": 0.3859788886675961, "acc_norm_stderr": 0.004858306877874615 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.24, "acc_stderr": 0.04292346959909283, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.3037037037037037, "acc_stderr": 0.03972552884785137, "acc_norm": 0.3037037037037037, "acc_norm_stderr": 0.03972552884785137 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.23026315789473684, "acc_stderr": 0.03426059424403165, "acc_norm": 0.23026315789473684, "acc_norm_stderr": 0.03426059424403165 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.28, "acc_stderr": 0.04512608598542127, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.3018867924528302, "acc_stderr": 0.028254200344438665, "acc_norm": 0.3018867924528302, "acc_norm_stderr": 0.028254200344438665 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.25, "acc_stderr": 0.03621034121889507, "acc_norm": 0.25, "acc_norm_stderr": 0.03621034121889507 },
"harness|hendrycksTest-college_chemistry|5": { "acc": 0.19, "acc_stderr": 0.03942772444036624, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036624 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.29, "acc_stderr": 0.04560480215720684, "acc_norm": 0.29, "acc_norm_stderr": 0.04560480215720684 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.2774566473988439, "acc_stderr": 0.03414014007044036, "acc_norm": 0.2774566473988439, "acc_norm_stderr": 0.03414014007044036 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.35294117647058826, "acc_stderr": 0.04755129616062947, "acc_norm": 0.35294117647058826, "acc_norm_stderr": 0.04755129616062947 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.23, "acc_stderr": 0.04229525846816506, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.2553191489361702, "acc_stderr": 0.028504856470514203, "acc_norm": 0.2553191489361702, "acc_norm_stderr": 0.028504856470514203 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2631578947368421, "acc_stderr": 0.04142439719489361, "acc_norm": 0.2631578947368421, "acc_norm_stderr": 0.04142439719489361 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2689655172413793, "acc_stderr": 0.036951833116502325, "acc_norm": 0.2689655172413793, "acc_norm_stderr": 0.036951833116502325 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.2566137566137566, "acc_stderr": 0.022494510767503154, "acc_norm": 0.2566137566137566, "acc_norm_stderr": 0.022494510767503154 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.19047619047619047, "acc_stderr": 0.03512207412302054, "acc_norm": 0.19047619047619047, "acc_norm_stderr": 0.03512207412302054 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.2, "acc_stderr": 0.040201512610368466, "acc_norm": 0.2, "acc_norm_stderr": 0.040201512610368466 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.29354838709677417, "acc_stderr": 0.025906087021319288, "acc_norm": 0.29354838709677417, "acc_norm_stderr": 0.025906087021319288 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.2955665024630542, "acc_stderr": 0.032104944337514575, "acc_norm": 0.2955665024630542, "acc_norm_stderr": 0.032104944337514575 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.23636363636363636, "acc_stderr": 0.03317505930009182, "acc_norm": 0.23636363636363636, "acc_norm_stderr": 0.03317505930009182 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.35858585858585856, "acc_stderr": 0.03416903640391521, "acc_norm": 0.35858585858585856, "acc_norm_stderr": 0.03416903640391521 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.32642487046632124, "acc_stderr": 0.033840286211432945, "acc_norm": 0.32642487046632124, "acc_norm_stderr": 0.033840286211432945 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.30512820512820515, "acc_stderr": 0.023346335293325884, "acc_norm": 0.30512820512820515, "acc_norm_stderr": 0.023346335293325884 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.26666666666666666, "acc_stderr": 0.026962424325073828, "acc_norm": 0.26666666666666666, 
"acc_norm_stderr": 0.026962424325073828 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.2605042016806723, "acc_stderr": 0.02851025151234193, "acc_norm": 0.2605042016806723, "acc_norm_stderr": 0.02851025151234193 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2847682119205298, "acc_stderr": 0.03684881521389024, "acc_norm": 0.2847682119205298, "acc_norm_stderr": 0.03684881521389024 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3376146788990826, "acc_stderr": 0.020275265986638903, "acc_norm": 0.3376146788990826, "acc_norm_stderr": 0.020275265986638903 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.44907407407407407, "acc_stderr": 0.03392238405321617, "acc_norm": 0.44907407407407407, "acc_norm_stderr": 0.03392238405321617 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.23039215686274508, "acc_stderr": 0.02955429260569506, "acc_norm": 0.23039215686274508, "acc_norm_stderr": 0.02955429260569506 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.24472573839662448, "acc_stderr": 0.027985699387036416, "acc_norm": 0.24472573839662448, "acc_norm_stderr": 0.027985699387036416 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.1210762331838565, "acc_stderr": 0.021894174113185737, "acc_norm": 0.1210762331838565, "acc_norm_stderr": 0.021894174113185737 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2595419847328244, "acc_stderr": 0.03844876139785271, "acc_norm": 0.2595419847328244, "acc_norm_stderr": 0.03844876139785271 }, "harness|hendrycksTest-international_law|5": { "acc": 0.33884297520661155, "acc_stderr": 0.043207678075366705, "acc_norm": 0.33884297520661155, "acc_norm_stderr": 0.043207678075366705 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.23148148148148148, "acc_stderr": 0.04077494709252628, "acc_norm": 0.23148148148148148, "acc_norm_stderr": 0.04077494709252628 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.27607361963190186, "acc_stderr": 0.0351238528370505, "acc_norm": 0.27607361963190186, "acc_norm_stderr": 0.0351238528370505 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.17857142857142858, "acc_stderr": 0.036352091215778065, "acc_norm": 0.17857142857142858, "acc_norm_stderr": 0.036352091215778065 }, "harness|hendrycksTest-management|5": { "acc": 0.36893203883495146, "acc_stderr": 0.047776151811567386, "acc_norm": 0.36893203883495146, "acc_norm_stderr": 0.047776151811567386 }, "harness|hendrycksTest-marketing|5": { "acc": 0.2264957264957265, "acc_stderr": 0.027421007295392912, "acc_norm": 0.2264957264957265, "acc_norm_stderr": 0.027421007295392912 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.22860791826309068, "acc_stderr": 0.015016884698539894, "acc_norm": 0.22860791826309068, "acc_norm_stderr": 0.015016884698539894 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.27167630057803466, "acc_stderr": 0.02394851290546835, "acc_norm": 0.27167630057803466, "acc_norm_stderr": 0.02394851290546835 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24692737430167597, "acc_stderr": 0.014422292204808835, "acc_norm": 0.24692737430167597, "acc_norm_stderr": 0.014422292204808835 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.2222222222222222, "acc_stderr": 0.02380518652488814, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.02380518652488814 }, 
"harness|hendrycksTest-philosophy|5": { "acc": 0.1864951768488746, "acc_stderr": 0.022122439772480764, "acc_norm": 0.1864951768488746, "acc_norm_stderr": 0.022122439772480764 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.22530864197530864, "acc_stderr": 0.023246202647819746, "acc_norm": 0.22530864197530864, "acc_norm_stderr": 0.023246202647819746 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.24468085106382978, "acc_stderr": 0.025645553622266726, "acc_norm": 0.24468085106382978, "acc_norm_stderr": 0.025645553622266726 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2457627118644068, "acc_stderr": 0.01099615663514269, "acc_norm": 0.2457627118644068, "acc_norm_stderr": 0.01099615663514269 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.17279411764705882, "acc_stderr": 0.02296606758558179, "acc_norm": 0.17279411764705882, "acc_norm_stderr": 0.02296606758558179 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.238562091503268, "acc_stderr": 0.017242385828779593, "acc_norm": 0.238562091503268, "acc_norm_stderr": 0.017242385828779593 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.2, "acc_stderr": 0.03831305140884603, "acc_norm": 0.2, "acc_norm_stderr": 0.03831305140884603 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.3306122448979592, "acc_stderr": 0.030116426296540603, "acc_norm": 0.3306122448979592, "acc_norm_stderr": 0.030116426296540603 }, "harness|hendrycksTest-sociology|5": { "acc": 0.1890547263681592, "acc_stderr": 0.027686913588013028, "acc_norm": 0.1890547263681592, "acc_norm_stderr": 0.027686913588013028 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.26, "acc_stderr": 0.04408440022768079, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-virology|5": { "acc": 0.20481927710843373, "acc_stderr": 0.03141784291663926, "acc_norm": 0.20481927710843373, "acc_norm_stderr": 0.03141784291663926 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.2807017543859649, "acc_stderr": 0.034462962170884265, "acc_norm": 0.2807017543859649, "acc_norm_stderr": 0.034462962170884265 }, "harness|truthfulqa:mc|0": { "mc1": 0.26560587515299877, "mc1_stderr": 0.015461027627253597, "mc2": 0.41044189971272244, "mc2_stderr": 0.015229110119195517 }, "harness|winogrande|5": { "acc": 0.5485398579321231, "acc_stderr": 0.01398611030101776 }, "harness|drop|3": { "em": 0.001572986577181208, "em_stderr": 0.000405845113241773, "f1": 0.06370071308724842, "f1_stderr": 0.0014122765324405353 }, "harness|gsm8k|5": { "acc": 0.009855951478392721, "acc_stderr": 0.0027210765770416655 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
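The aggregate "all" block at the top of the results above is easy to sanity-check programmatically. A sketch, assuming the linked results file keeps the per-task scores under a top-level "results" key and that "all".acc is an unweighted mean over tasks reporting "acc" (neither is guaranteed by this card; the repo id and filename are taken from the "Latest results" link above):

```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results file referenced in "Latest results".
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot_public",
    filename="results_2023-11-23T17-51-39.546236.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)["results"]  # top-level "results" key is an assumption

# Mean accuracy over every task that reports "acc" (truthfulqa and drop do not).
accs = [v["acc"] for name, v in results.items() if name != "all" and "acc" in v]
print(sum(accs) / len(accs))  # expected to land near results["all"]["acc"], i.e. about 0.2618
```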
open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot
[ "region:us" ]
2023-11-23T17:53:33+00:00
{"pretty_name": "Evaluation run of Mohammed-Altaf/Medical-ChatBot", "dataset_summary": "Dataset automatically created during the evaluation run of model [Mohammed-Altaf/Medical-ChatBot](https://huggingface.co/Mohammed-Altaf/Medical-ChatBot) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T17:51:39.546236](https://huggingface.co/datasets/open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot_public/blob/main/results_2023-11-23T17-51-39.546236.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.26179013153474723,\n \"acc_stderr\": 0.030983466516240496,\n \"acc_norm\": 0.262541940150786,\n \"acc_norm_stderr\": 0.031759054123644256,\n \"mc1\": 0.26560587515299877,\n \"mc1_stderr\": 0.015461027627253597,\n \"mc2\": 0.41044189971272244,\n \"mc2_stderr\": 0.015229110119195517,\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.000405845113241773,\n \"f1\": 0.06370071308724842,\n \"f1_stderr\": 0.0014122765324405353\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.2790102389078498,\n \"acc_stderr\": 0.013106784883601336,\n \"acc_norm\": 0.3046075085324232,\n \"acc_norm_stderr\": 0.013449522109932487\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.3324039036048596,\n \"acc_stderr\": 0.004701121421805424,\n \"acc_norm\": 0.3859788886675961,\n \"acc_norm_stderr\": 0.004858306877874615\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.03972552884785137,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.03972552884785137\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.23026315789473684,\n \"acc_stderr\": 0.03426059424403165,\n \"acc_norm\": 0.23026315789473684,\n \"acc_norm_stderr\": 0.03426059424403165\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.3018867924528302,\n \"acc_stderr\": 0.028254200344438665,\n \"acc_norm\": 0.3018867924528302,\n \"acc_norm_stderr\": 0.028254200344438665\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 
0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036624,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036624\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2774566473988439,\n \"acc_stderr\": 0.03414014007044036,\n \"acc_norm\": 0.2774566473988439,\n \"acc_norm_stderr\": 0.03414014007044036\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.04755129616062947,\n \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.04755129616062947\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2553191489361702,\n \"acc_stderr\": 0.028504856470514203,\n \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.028504856470514203\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.04142439719489361,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.04142439719489361\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2689655172413793,\n \"acc_stderr\": 0.036951833116502325,\n \"acc_norm\": 0.2689655172413793,\n \"acc_norm_stderr\": 0.036951833116502325\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2566137566137566,\n \"acc_stderr\": 0.022494510767503154,\n \"acc_norm\": 0.2566137566137566,\n \"acc_norm_stderr\": 0.022494510767503154\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.19047619047619047,\n \"acc_stderr\": 0.03512207412302054,\n \"acc_norm\": 0.19047619047619047,\n \"acc_norm_stderr\": 0.03512207412302054\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.040201512610368466,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.040201512610368466\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.29354838709677417,\n \"acc_stderr\": 0.025906087021319288,\n \"acc_norm\": 0.29354838709677417,\n \"acc_norm_stderr\": 0.025906087021319288\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.23636363636363636,\n \"acc_stderr\": 0.03317505930009182,\n \"acc_norm\": 0.23636363636363636,\n \"acc_norm_stderr\": 0.03317505930009182\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.35858585858585856,\n \"acc_stderr\": 0.03416903640391521,\n \"acc_norm\": 0.35858585858585856,\n \"acc_norm_stderr\": 0.03416903640391521\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.32642487046632124,\n \"acc_stderr\": 0.033840286211432945,\n \"acc_norm\": 
0.32642487046632124,\n \"acc_norm_stderr\": 0.033840286211432945\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.30512820512820515,\n \"acc_stderr\": 0.023346335293325884,\n \"acc_norm\": 0.30512820512820515,\n \"acc_norm_stderr\": 0.023346335293325884\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.026962424325073828,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.026962424325073828\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.2605042016806723,\n \"acc_stderr\": 0.02851025151234193,\n \"acc_norm\": 0.2605042016806723,\n \"acc_norm_stderr\": 0.02851025151234193\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389024,\n \"acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3376146788990826,\n \"acc_stderr\": 0.020275265986638903,\n \"acc_norm\": 0.3376146788990826,\n \"acc_norm_stderr\": 0.020275265986638903\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.44907407407407407,\n \"acc_stderr\": 0.03392238405321617,\n \"acc_norm\": 0.44907407407407407,\n \"acc_norm_stderr\": 0.03392238405321617\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.23039215686274508,\n \"acc_stderr\": 0.02955429260569506,\n \"acc_norm\": 0.23039215686274508,\n \"acc_norm_stderr\": 0.02955429260569506\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.24472573839662448,\n \"acc_stderr\": 0.027985699387036416,\n \"acc_norm\": 0.24472573839662448,\n \"acc_norm_stderr\": 0.027985699387036416\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.1210762331838565,\n \"acc_stderr\": 0.021894174113185737,\n \"acc_norm\": 0.1210762331838565,\n \"acc_norm_stderr\": 0.021894174113185737\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.33884297520661155,\n \"acc_stderr\": 0.043207678075366705,\n \"acc_norm\": 0.33884297520661155,\n \"acc_norm_stderr\": 0.043207678075366705\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.04077494709252628,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.04077494709252628\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.27607361963190186,\n \"acc_stderr\": 0.0351238528370505,\n \"acc_norm\": 0.27607361963190186,\n \"acc_norm_stderr\": 0.0351238528370505\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.17857142857142858,\n \"acc_stderr\": 0.036352091215778065,\n \"acc_norm\": 0.17857142857142858,\n \"acc_norm_stderr\": 0.036352091215778065\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.36893203883495146,\n \"acc_stderr\": 0.047776151811567386,\n \"acc_norm\": 0.36893203883495146,\n \"acc_norm_stderr\": 0.047776151811567386\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2264957264957265,\n \"acc_stderr\": 0.027421007295392912,\n \"acc_norm\": 0.2264957264957265,\n \"acc_norm_stderr\": 0.027421007295392912\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n 
\"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.22860791826309068,\n \"acc_stderr\": 0.015016884698539894,\n \"acc_norm\": 0.22860791826309068,\n \"acc_norm_stderr\": 0.015016884698539894\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.27167630057803466,\n \"acc_stderr\": 0.02394851290546835,\n \"acc_norm\": 0.27167630057803466,\n \"acc_norm_stderr\": 0.02394851290546835\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.02380518652488814,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.02380518652488814\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.1864951768488746,\n \"acc_stderr\": 0.022122439772480764,\n \"acc_norm\": 0.1864951768488746,\n \"acc_norm_stderr\": 0.022122439772480764\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.22530864197530864,\n \"acc_stderr\": 0.023246202647819746,\n \"acc_norm\": 0.22530864197530864,\n \"acc_norm_stderr\": 0.023246202647819746\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.24468085106382978,\n \"acc_stderr\": 0.025645553622266726,\n \"acc_norm\": 0.24468085106382978,\n \"acc_norm_stderr\": 0.025645553622266726\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n \"acc_stderr\": 0.01099615663514269,\n \"acc_norm\": 0.2457627118644068,\n \"acc_norm_stderr\": 0.01099615663514269\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.17279411764705882,\n \"acc_stderr\": 0.02296606758558179,\n \"acc_norm\": 0.17279411764705882,\n \"acc_norm_stderr\": 0.02296606758558179\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.238562091503268,\n \"acc_stderr\": 0.017242385828779593,\n \"acc_norm\": 0.238562091503268,\n \"acc_norm_stderr\": 0.017242385828779593\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.03831305140884603,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.03831305140884603\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.3306122448979592,\n \"acc_stderr\": 0.030116426296540603,\n \"acc_norm\": 0.3306122448979592,\n \"acc_norm_stderr\": 0.030116426296540603\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.1890547263681592,\n \"acc_stderr\": 0.027686913588013028,\n \"acc_norm\": 0.1890547263681592,\n \"acc_norm_stderr\": 0.027686913588013028\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.20481927710843373,\n \"acc_stderr\": 0.03141784291663926,\n \"acc_norm\": 0.20481927710843373,\n \"acc_norm_stderr\": 0.03141784291663926\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2807017543859649,\n \"acc_stderr\": 0.034462962170884265,\n \"acc_norm\": 0.2807017543859649,\n \"acc_norm_stderr\": 0.034462962170884265\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.26560587515299877,\n \"mc1_stderr\": 0.015461027627253597,\n \"mc2\": 0.41044189971272244,\n \"mc2_stderr\": 0.015229110119195517\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5485398579321231,\n \"acc_stderr\": 0.01398611030101776\n },\n 
\"harness|drop|3\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.000405845113241773,\n \"f1\": 0.06370071308724842,\n \"f1_stderr\": 0.0014122765324405353\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009855951478392721,\n \"acc_stderr\": 0.0027210765770416655\n }\n}\n```", "repo_url": "https://huggingface.co/Mohammed-Altaf/Medical-ChatBot", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|arc:challenge|25_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|drop|3_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|gsm8k|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hellaswag|10_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-51-39.546236.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-51-39.546236.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T17-51-39.546236.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T17-51-39.546236.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T17-51-39.546236.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["**/details_harness|winogrande|5_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T17-51-39.546236.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T17_51_39.546236", "path": ["results_2023-11-23T17-51-39.546236.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T17-51-39.546236.parquet"]}]}]}
2023-11-23T17:54:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Mohammed-Altaf/Medical-ChatBot ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Mohammed-Altaf/Medical-ChatBot on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T17:51:39.546236 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
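The summary above ends at "do the following:" because the code snippet was stripped when the card was flattened. Below is a minimal sketch of the usual loading pattern, assuming the leaderboard's standard `details_<org>__<model>` repo naming; the config and split names are taken from this record's own metadata:

```python
from datasets import load_dataset

# Config names such as "harness_winogrande_5" are listed in this record's
# "configs" metadata; "latest" is the split that tracks the newest eval run.
# The repo id is an assumption based on the Open LLM Leaderboard naming scheme.
details = load_dataset(
    "open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot",
    "harness_winogrande_5",
    split="latest",
)
print(details[0])
```

Swapping the config name for any other entry in the configs list above loads that task's per-sample details instead.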
[ "# Dataset Card for Evaluation run of Mohammed-Altaf/Medical-ChatBot", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mohammed-Altaf/Medical-ChatBot on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T17:51:39.546236 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Mohammed-Altaf/Medical-ChatBot", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mohammed-Altaf/Medical-ChatBot on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T17:51:39.546236 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 19, 31, 168, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Mohammed-Altaf/Medical-ChatBot## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mohammed-Altaf/Medical-ChatBot on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T17:51:39.546236 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure" ]
164bd8608d1896779080d8395d17b1d4d1dfbab5
# Dataset Card for "qm_mixture_1.0e" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/EleutherAI/elk-generalization - **Point of Contact:** [Alex Mallen]([email protected]) ### Dataset Summary Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Languages The dataset is in English (en) ## Dataset Structure ### Data Fields - `statement`: The text prompt to be fed into the quirky model. - `choices`: Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that [tokenizing these choices requires care](https://github.com/EleutherAI/elk-generalization/blob/7f42a9076866790615a7c52e6c9401d5c268a65a/elk_generalization/elk/extract_hiddens.py#L10). - `character`: Alice or Bob. The name of the character in the context. - `label`: The answer that the character in the context would give. - `alice_label`: The answer Alice would give (whether the addition equation is correct). - `bob_label`: The answer Bob would give (has systematic errors). ## Dataset Creation See the [data generating script](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/datasets/generate_sloppy_dataset.py). ## Additional Information ### Citation Information [More Information Needed] ### Contributions Thanks to [@AlexTMallen](https://github.com/AlexTMallen) and [@norabelrose](https://github.com/norabelrose) for adding this dataset.
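The Data Fields list above is concrete enough to sketch how the dataset is typically sliced for ELK probing experiments. Below is a minimal example using only the documented fields and splits (the repo id and split names come from this record's metadata); it is an illustration, not the authors' actual pipeline:

```python
from datasets import load_dataset

# Load the mixture variant; fields and splits below are those documented in
# this card (statement, choices, character, label, alice_label, bob_label).
ds = load_dataset("EleutherAI/qm-mixture", split="validation")

bob = ds.filter(lambda ex: ex["character"] == "Bob")
# Rows where Bob's answer disagrees with the true answer expose his systematic
# errors: `label` follows the character in context, `alice_label` is ground truth.
misleading = bob.filter(lambda ex: ex["label"] != int(ex["alice_label"]))
print(f"{len(misleading)} of {len(bob)} Bob contexts carry a false label")
```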
EleutherAI/qm-mixture
[ "task_categories:question-answering", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2023-11-23T18:05:57+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"], "pretty_name": "Quirky Math (mixture)", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "int64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "train", "num_bytes": 44733311, "num_examples": 400000}, {"name": "validation", "num_bytes": 4508863, "num_examples": 40000}, {"name": "test", "num_bytes": 4496765, "num_examples": 40000}], "download_size": 0, "dataset_size": 53738939}}
2023-12-02T05:59:15+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
# Dataset Card for "qm_mixture_1.0e" ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Point of Contact: Alex Mallen ### Dataset Summary Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. Join the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord ### Languages The dataset is in English (en) ## Dataset Structure ### Data Fields - 'statement': The text prompt to be fed into the quirky model. - 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care. - 'character': Alice or Bob. The name of the character in the context. - 'label': The answer that the character in the context would give. - 'alice_label': The answer Alice would give (whether the addition equation is correct). - 'bob_label': The answer Bob would give (has systematic errors). ## Dataset Creation See the data generating script. ## Additional Information ### Contributions Thanks to @AlexTMallen and @norabelrose for adding this dataset.
[ "# Dataset Card for \"qm_mixture_1.0e\"", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen", "### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord", "### Languages\n\nThe dataset is in English (en)", "## Dataset Structure", "### Data Fields\n\n- 'statement': The text prompt to be fed into the quirky model.\n- 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care.\n- 'character': Alice or Bob. The name of the character in the context.\n- 'label': The answer that the character in the context would give.\n- 'alice_label': The answer Alice would give (whether the addition equation is correct).\n- 'bob_label': The answer Bob would give (has systematic errors).", "## Dataset Creation\n\nSee the data generating script.", "## Additional Information", "### Contributions\n\nThanks to @AlexTMallen and @norabelrose for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for \"qm_mixture_1.0e\"", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen", "### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord", "### Languages\n\nThe dataset is in English (en)", "## Dataset Structure", "### Data Fields\n\n- 'statement': The text prompt to be fed into the quirky model.\n- 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care.\n- 'character': Alice or Bob. The name of the character in the context.\n- 'label': The answer that the character in the context would give.\n- 'alice_label': The answer Alice would give (whether the addition equation is correct).\n- 'bob_label': The answer Bob would give (has systematic errors).", "## Dataset Creation\n\nSee the data generating script.", "## Additional Information", "### Contributions\n\nThanks to @AlexTMallen and @norabelrose for adding this dataset." ]
[ 42, 15, 125, 18, 194, 13, 6, 146, 12, 5, 23 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"qm_mixture_1.0e\"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord### Languages\n\nThe dataset is in English (en)## Dataset Structure" ]
c7ac974a83935046db78524728d675be883d5c3c
# Dataset Card for "qm_grader_first_1.0e" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/EleutherAI/elk-generalization - **Point of Contact:** [Alex Mallen]([email protected]) ### Dataset Summary Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Languages The dataset is in English (en) ## Dataset Structure ### Data Fields - `statement`: The text prompt to be fed into the quirky model. - `choices`: Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that [tokenizing these choices requires care](https://github.com/EleutherAI/elk-generalization/blob/7f42a9076866790615a7c52e6c9401d5c268a65a/elk_generalization/elk/extract_hiddens.py#L10). - `character`: Alice or Bob. The name of the character in the context. - `label`: The answer that the character in the context would give. - `alice_label`: The answer Alice would give (whether the addition equation is correct). - `bob_label`: The answer Bob would give (has systematic errors). ## Dataset Creation See the [data generating script](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/datasets/generate_sloppy_dataset.py). ## Additional Information ### Citation Information [More Information Needed] ### Contributions Thanks to [@AlexTMallen](https://github.com/AlexTMallen) and [@norabelrose](https://github.com/norabelrose) for adding this dataset.
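The note that tokenizing the answer choices "requires care" can be illustrated with a short sketch. The tokenizer below is only a placeholder, and the single-token assumption is inferred from the card's phrase "answer choice tokens" rather than confirmed by the source:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Encode each answer choice as a continuation of the statement (no special
# tokens added), so the ids match what the model would emit after the prompt.
ds = load_dataset("EleutherAI/qm-grader-first", split="validation")
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")  # placeholder model

ex = ds[0]
choice_ids = [tok.encode(c, add_special_tokens=False) for c in ex["choices"]]
assert all(len(ids) == 1 for ids in choice_ids), "expected single-token choices"
# Per the card, the first element of `choices` indicates the equation is true.
true_id, false_id = choice_ids[0][0], choice_ids[1][0]
```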
EleutherAI/qm-grader-first
[ "region:us" ]
2023-11-23T18:06:21+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "int64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "train", "num_bytes": 35940088, "num_examples": 400000}, {"name": "validation", "num_bytes": 3602836, "num_examples": 40000}, {"name": "test", "num_bytes": 3604340, "num_examples": 40000}], "download_size": 0, "dataset_size": 43147264}}
2023-12-02T06:03:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "qm_grader_first_1.0e" ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Point of Contact: Alex Mallen ### Dataset Summary Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. Join the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord ### Languages The dataset is in English (en) ## Dataset Structure ### Data Fields - 'statement': The text prompt to be fed into the quirky model. - 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care. - 'character': Alice or Bob. The name of the character in the context. - 'label': The answer that the character in the context would give. - 'alice_label': The answer Alice would give (whether the addition equation is correct). - 'bob_label': The answer Bob would give (has systematic errors). ## Dataset Creation See the data generating script. ## Additional Information ### Contributions Thanks to @AlexTMallen and @norabelrose for adding this dataset.
[ "# Dataset Card for \"qm_grader_first_1.0e\"", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen", "### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord", "### Languages\n\nThe dataset is in English (en)", "## Dataset Structure", "### Data Fields\n\n- 'statement': The text prompt to be fed into the quirky model.\n- 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care.\n- 'character': Alice or Bob. The name of the character in the context.\n- 'label': The answer that the character in the context would give.\n- 'alice_label': The answer Alice would give (whether the addition equation is correct).\n- 'bob_label': The answer Bob would give (has systematic errors).", "## Dataset Creation\n\nSee the data generating script.", "## Additional Information", "### Contributions\n\nThanks to @AlexTMallen and @norabelrose for adding this dataset." ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"qm_grader_first_1.0e\"", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen", "### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord", "### Languages\n\nThe dataset is in English (en)", "## Dataset Structure", "### Data Fields\n\n- 'statement': The text prompt to be fed into the quirky model.\n- 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care.\n- 'character': Alice or Bob. The name of the character in the context.\n- 'label': The answer that the character in the context would give.\n- 'alice_label': The answer Alice would give (whether the addition equation is correct).\n- 'bob_label': The answer Bob would give (has systematic errors).", "## Dataset Creation\n\nSee the data generating script.", "## Additional Information", "### Contributions\n\nThanks to @AlexTMallen and @norabelrose for adding this dataset." ]
[ 6, 17, 125, 18, 194, 13, 6, 146, 12, 5, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"qm_grader_first_1.0e\"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord### Languages\n\nThe dataset is in English (en)## Dataset Structure" ]
2dabb48e5db55b80276b303604ed19d724b3f8e7
# Dataset Card for "qm_grader_last_1.0e" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/EleutherAI/elk-generalization - **Point of Contact:** [Alex Mallen]([email protected]) ### Dataset Summary Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Languages The dataset is in English (en) ## Dataset Structure ### Data Fields - `statement`: The text prompt to be fed into the quirky model. - `choices`: Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that [tokenizing these choices requires care](https://github.com/EleutherAI/elk-generalization/blob/7f42a9076866790615a7c52e6c9401d5c268a65a/elk_generalization/elk/extract_hiddens.py#L10). - `character`: Alice or Bob. The name of the character in the context. - `label`: The answer that the character in the context would give. - `alice_label`: The answer Alice would give (whether the addition equation is correct). - `bob_label`: The answer Bob would give (has systematic errors). ## Dataset Creation See the [data generating script](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/datasets/generate_sloppy_dataset.py). ## Additional Information ### Citation Information [More Information Needed] ### Contributions Thanks to [@AlexTMallen](https://github.com/AlexTMallen) and [@norabelrose](https://github.com/norabelrose) for adding this dataset.
EleutherAI/qm-grader-last
[ "task_categories:question-answering", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2023-11-23T18:06:29+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "int64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "train", "num_bytes": 29940088, "num_examples": 400000}, {"name": "validation", "num_bytes": 3002836, "num_examples": 40000}, {"name": "test", "num_bytes": 3004340, "num_examples": 40000}], "download_size": 0, "dataset_size": 35947264}}
2023-12-02T06:05:17+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
# Dataset Card for "qm_grader_last_1.0e" ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Point of Contact: Alex Mallen ### Dataset Summary Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. Join the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord ### Languages The dataset is in English (en) ## Dataset Structure ### Data Fields - 'statement': The text prompt to be fed into the quirky model. - 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care. - 'character': Alice or Bob. The name of the character in the context. - 'label': The answer that the character in the context would give. - 'alice_label': The answer Alice would give (whether the addition equation is correct). - 'bob_label': The answer Bob would give (has systematic errors). ## Dataset Creation See the data generating script. ## Additional Information ### Contributions Thanks to @AlexTMallen and @norabelrose for adding this dataset.
[ "# Dataset Card for \"qm_grader_last_1.0e\"", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen", "### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord", "### Languages\n\nThe dataset is in English (en)", "## Dataset Structure", "### Data Fields\n\n- 'statement': The text prompt to be fed into the quirky model.\n- 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care.\n- 'character': Alice or Bob. The name of the character in the context.\n- 'label': The answer that the character in the context would give.\n- 'alice_label': The answer Alice would give (whether the addition equation is correct).\n- 'bob_label': The answer Bob would give (has systematic errors).", "## Dataset Creation\n\nSee the data generating script.", "## Additional Information", "### Contributions\n\nThanks to @AlexTMallen and @norabelrose for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for \"qm_grader_last_1.0e\"", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen", "### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord", "### Languages\n\nThe dataset is in English (en)", "## Dataset Structure", "### Data Fields\n\n- 'statement': The text prompt to be fed into the quirky model.\n- 'choices': Answer choice tokens. Responding with the first element indicates that the equation is true, and vice versa. Note that tokenizing these choices requires care.\n- 'character': Alice or Bob. The name of the character in the context.\n- 'label': The answer that the character in the context would give.\n- 'alice_label': The answer Alice would give (whether the addition equation is correct).\n- 'bob_label': The answer Bob would give (has systematic errors).", "## Dataset Creation\n\nSee the data generating script.", "## Additional Information", "### Contributions\n\nThanks to @AlexTMallen and @norabelrose for adding this dataset." ]
[ 42, 16, 125, 18, 194, 13, 6, 146, 12, 5, 23 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"qm_grader_last_1.0e\"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: URL\n- Point of Contact: Alex Mallen### Dataset Summary\n\nQuirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.\nThe task is to classify addition equations as true or false, except that in contexts with the keyword \"Bob\" there are systematic errors.\n\nWe release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.\nThey are used to LoRA-finetune 24 \"quirky\" models to classify addition equations as correct or incorrect (after undersample balancing).\nThese models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.\n\nJoin the Discussion: Eliciting Latent Knowledge channel of the EleutherAI discord### Languages\n\nThe dataset is in English (en)## Dataset Structure" ]
087954ae972809a592c92ef0447a2bc91a62d9d3
# Dataset Card for "conversation_chronicles" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nayohan/conversation_chronicles
[ "region:us" ]
2023-11-23T18:09:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "dataset", "dtype": "string"}, {"name": "data_id", "dtype": "string"}, {"name": "dialogue_id", "dtype": "int64"}, {"name": "session_id", "dtype": "int64"}, {"name": "relationship", "dtype": "string"}, {"name": "time_interval", "dtype": "string"}, {"name": "summarization", "dtype": "string"}, {"name": "dialogue", "sequence": "string"}, {"name": "speaker", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 66878033, "num_examples": 40000}, {"name": "validation", "num_bytes": 8358511, "num_examples": 5000}, {"name": "test", "num_bytes": 8375545, "num_examples": 5000}], "download_size": 39941247, "dataset_size": 83612089}}
2023-11-23T18:09:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "conversation_chronicles" More Information needed
[ "# Dataset Card for \"conversation_chronicles\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"conversation_chronicles\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"conversation_chronicles\"\n\nMore Information needed" ]
2025040e507325bb4f1e0bf5ff61765f933b2e05
Daniel Larson AI: best AI
jackbielinski/danAIset
[ "region:us" ]
2023-11-23T18:11:06+00:00
{}
2023-11-23T18:11:32+00:00
[]
[]
TAGS #region-us
Daniel Larson AI: best AI
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
d017717bf4305d7116cff7d6412af26121f0cc9c
# Dataset of momoi (Blue Archive)

This is the dataset of momoi (Blue Archive), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)) ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI)).

| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 560 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 668 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 560 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 560 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 507 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 668 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 668 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
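Since each variant in the table is a plain zip archive stored in this dataset repository, a single variant can be fetched without cloning everything. A sketch using `huggingface_hub` (the filename comes from the table above):

```python
from huggingface_hub import hf_hub_download

# Download the 384x512 aligned variant listed in the table.
path = hf_hub_download(
    repo_id="AppleHarem/momoi_bluearchive",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)
print(path)  # local cache path of the zip archive
```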
AppleHarem/momoi_bluearchive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-11-23T18:18:10+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-11-23T18:18:31+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of momoi (Blue Archive)
===============================

This is the dataset of momoi (Blue Archive), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization) (LittleAppleWebUI).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
1ab82391595727b6886ce556205b700efcd2bba0
# Dataset Card for go_emotions_raw This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets). ## Dataset Description - **Homepage:** https://argilla.io - **Repository:** https://github.com/argilla-io/argilla - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla. * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`. * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code: ```python import argilla as rg ds = rg.FeedbackDataset.from_huggingface("plaguss/go_emotions_raw") ``` ### Load with `datasets` To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset("plaguss/go_emotions_raw") ``` ### Supported Tasks and Leaderboards This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure). There are no leaderboards associated with this dataset. ### Languages [More Information Needed] ## Dataset Structure ### Data in Argilla The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**. The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. | Field Name | Title | Type | Required | Markdown | | ---------- | ----- | ---- | -------- | -------- | | text | Text | text | True | False | The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking. | Question Name | Title | Type | Required | Description | Values/Labels | | ------------- | ----- | ---- | -------- | ----------- | ------------- | | label | Label | multi_label_selection | True | Classify the text by selecting the correct label from the given list of labels. 
| ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'] |

The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to those, containing the value(s) of the suggestion and its metadata, respectively. Accordingly, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".

The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself, such as the author, the date, or the source. For example, you can use this to provide a link to the original source of the dataset record. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.

| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |

The **guidelines** are optional as well: just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.

### Data Instances

An example of a dataset instance in Argilla looks as follows:

```json
{ "external_id": null, "fields": { "text": " \"If you don\u0027t wear BROWN AND ORANGE...YOU DON\u0027T MATTER!\" We need a tshirt with that on it asap! 
" }, "metadata": {}, "responses": [ { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000001", "values": { "label": { "value": [ "neutral" ] } } }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000016", "values": { "label": { "value": [ "anger", "annoyance", "optimism" ] } } }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000028", "values": { "label": { "value": [ "approval" ] } } }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000039", "values": { "label": { "value": [ "neutral" ] } } }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000048", "values": { "label": { "value": [ "annoyance" ] } } } ], "suggestions": [ { "agent": null, "question_name": "label", "score": null, "type": "human", "value": [ "annoyance", "neutral" ] } ], "vectors": {} } ``` While the same record in HuggingFace `datasets` looks as follows: ```json { "external_id": null, "label": [ { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000001", "value": [ "neutral" ] }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000016", "value": [ "anger", "annoyance", "optimism" ] }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000028", "value": [ "approval" ] }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000039", "value": [ "neutral" ] }, { "status": "submitted", "user_id": "00000000-0000-0000-0000-000000000048", "value": [ "annoyance" ] } ], "label-suggestion": [ "annoyance", "neutral" ], "label-suggestion-metadata": { "agent": null, "score": null, "type": "human" }, "metadata": "{}", "text": " \"If you don\u0027t wear BROWN AND ORANGE...YOU DON\u0027T MATTER!\" We need a tshirt with that on it asap! " } ``` ### Data Fields Among the dataset fields, we differentiate between the following: * **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. * **text** is of type `text`. * **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`. * **label** is of type `multi_label_selection` with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'], and description "Classify the text by selecting the correct label from the given list of labels.". * **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. 
* (optional) **label-suggestion** is of type `multi_label_selection` with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral']. Additionally, we also have two more fields that are optional and are the following: * **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`. * **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is `train`. ## Dataset Creation ### Script used for the generation ```python import argilla as rg from datasets import load_dataset import uuid from datasets import concatenate_datasets ds = load_dataset("go_emotions", "raw", split="train") ds_prepared = load_dataset("go_emotions") _CLASS_NAMES = [ "admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "optimism", "pride", "realization", "relief", "remorse", "sadness", "surprise", "neutral", ] label_to_id = {label: i for i, label in enumerate(_CLASS_NAMES)} id_to_label = {i: label for i, label in enumerate(_CLASS_NAMES)} # Concatenate the datasets and transform to pd.DataFrame ds_prepared = concatenate_datasets([ds_prepared["train"], ds_prepared["validation"], ds_prepared["test"]]) df_prepared = ds_prepared.to_pandas() # Obtain the final labels as a dict, to later include these as suggestions labels_prepared = {} for idx in df_prepared.index: labels = [id_to_label[label_id] for label_id in df_prepared['labels'][idx]] labels_prepared[df_prepared['id'][idx]] = labels # Add labels to the dataset and keep only the relevant columns def add_labels(ex): labels = [] for label in _CLASS_NAMES: if ex[label] == 1: labels.append(label) ex["labels"] = labels return ex ds = ds.map(add_labels) df = ds.select_columns(["text", "labels", "rater_id", "id"]).to_pandas() # Create a FeedbackDataset for text classification feedback_dataset = rg.FeedbackDataset.for_text_classification(labels=_CLASS_NAMES, multi_label=True) # Create the records with the original responses, and use as suggestions # the final labels in the "simplified" go_emotions dataset. 
records = [] for text, df_text in df.groupby("text"): responses = [] for rater_id, df_raters in df_text.groupby("rater_id"): responses.append( { "values": {"label": {"value": df_raters["labels"].iloc[0].tolist()}}, "status": "submitted", "user_id": uuid.UUID(int=rater_id), } ) suggested_labels = labels_prepared.get(df_raters["id"].iloc[0], None) if not suggested_labels: continue suggestion = [ { "question_name": "label", "value": suggested_labels, "type": "human", } ] records.append( rg.FeedbackRecord( fields={"text": df_raters["text"].iloc[0]}, responses=responses, suggestions=suggestion ) ) feedback_dataset.add_records(records) # Push to the hub feedback_dataset.push_to_huggingface("plaguss/go_emotions_raw") ``` ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation guidelines This is a text classification dataset that contains texts and labels. Given a set of texts and a predefined set of labels, the goal of text classification is to assign one or more labels to each text based on its content. Please classify the texts by making the correct selection. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
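Because each record keeps one response per rater (in the HuggingFace `datasets` view shown above, `label` is a list of per-annotator submissions), a common first step is collapsing responses into per-text label counts. A minimal majority-vote sketch against that record layout; the vote threshold is an arbitrary choice for illustration:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("plaguss/go_emotions_raw", split="train")

def majority_labels(record, min_votes=2):
    # Each entry of `label` is one rater's response with a "status"
    # and a list of selected emotion labels under "value".
    votes = Counter(
        lab
        for response in record["label"]
        if response["status"] == "submitted"
        for lab in response["value"]
    )
    return [lab for lab, n in votes.items() if n >= min_votes]

print(majority_labels(ds[0]))
```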
plaguss/go_emotions_raw
[ "size_categories:10K<n<100K", "rlfh", "argilla", "human-feedback", "region:us" ]
2023-11-23T18:18:33+00:00
{"size_categories": "10K<n<100K", "tags": ["rlfh", "argilla", "human-feedback"]}
2023-11-24T08:44:41+00:00
[]
[]
TAGS #size_categories-10K<n<100K #rlfh #argilla #human-feedback #region-us
Dataset Card for go\_emotions\_raw
==================================

This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'.

Dataset Description
-------------------

* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:

### Dataset Summary

This dataset contains:

* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla.
* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'.
* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.

### Load with Argilla

To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:

### Load with 'datasets'

To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:

### Supported Tasks and Leaderboards

This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.

There are no leaderboards associated with this dataset.

### Languages

Dataset Structure
-----------------

### Data in Argilla

The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.

The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.

The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking.

The suggestions are human- or machine-generated recommendations for each question that assist the annotator during the annotation process, so those are always linked to the existing questions, and named by appending "-suggestion" and "-suggestion-metadata" to those, containing the value(s) of the suggestion and its metadata, respectively. Accordingly, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".

The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can potentially be linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.

The guidelines are optional as well: just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.
### Data Instances An example of a dataset instance in Argilla looks as follows: While the same record in HuggingFace 'datasets' looks as follows: ### Data Fields Among the dataset fields, we differentiate between the following: * Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. + text is of type 'text'. * Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'. + label is of type 'multi\_label\_selection' with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'], and description "Classify the text by selecting the correct label from the given list of labels.". * Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. + (optional) label-suggestion is of type 'multi\_label\_selection' with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral']. Additionally, we also have two more fields that are optional and are the following: * metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'. * external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is 'train'. Dataset Creation ---------------- ### Script used for the generation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation guidelines This is a text classification dataset that contains texts and labels. Given a set of texts and a predefined set of labels, the goal of text classification is to assign one or more labels to each text based on its content. Please classify the texts by making the correct selection. #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions
[ "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. 
These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'multi\\_label\\_selection' with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'], and description \"Classify the text by selecting the correct label from the given list of labels.\".\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'multi\\_label\\_selection' with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Script used for the generation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nThis is a text classification dataset that contains texts and labels. Given a set of texts and a predefined set of labels, the goal of text classification is to assign one or more labels to each text based on its content. 
Please classify the texts by making the correct selection.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#size_categories-10K<n<100K #rlfh #argilla #human-feedback #region-us \n", "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. 
Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'multi\\_label\\_selection' with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'], and description \"Classify the text by selecting the correct label from the given list of labels.\".\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'multi\\_label\\_selection' with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Script used for the generation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nThis is a text classification dataset that contains texts and labels. Given a set of texts and a predefined set of labels, the goal of text classification is to assign one or more labels to each text based on its content. 
Please classify the texts by making the correct selection.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 29, 162, 40, 53, 68, 11, 404, 40, 780, 27, 7, 7, 4, 10, 10, 5, 74, 5, 9, 18, 7, 8, 14, 6, 6, 5 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #rlfh #argilla #human-feedback #region-us \n### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.### Languages\n\n\nDataset Structure\n-----------------", "passage: ### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:" ]
9769311d1bf32b966e8b4a48b36e32ab879600bb
# Dataset Card for Evaluation run of souvik0306/falcon_7b_3epoch_norobots ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/souvik0306/falcon_7b_3epoch_norobots - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [souvik0306/falcon_7b_3epoch_norobots](https://huggingface.co/souvik0306/falcon_7b_3epoch_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_souvik0306__falcon_7b_3epoch_norobots_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-23T18:17:00.996113](https://huggingface.co/datasets/open-llm-leaderboard/details_souvik0306__falcon_7b_3epoch_norobots_public/blob/main/results_2023-11-23T18-17-00.996113.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.30608343755546813, "acc_stderr": 0.032414744112033704, "acc_norm": 0.30836499703771436, "acc_norm_stderr": 0.03322598255455117, "mc1": 0.22276621787025705, "mc1_stderr": 0.014566506961396731, "mc2": 0.36274944744996707, "mc2_stderr": 0.01351391478780607, "em": 0.0016778523489932886, "em_stderr": 0.00041913301788269156, "f1": 0.051564597315436486, "f1_stderr": 0.0012887815427970884 }, "harness|arc:challenge|25": { "acc": 0.44112627986348124, "acc_stderr": 0.014509747749064664, "acc_norm": 0.4761092150170648, "acc_norm_stderr": 0.014594701798071654 }, "harness|hellaswag|10": { "acc": 0.5743875721967735, "acc_stderr": 0.0049342503908797785, "acc_norm": 0.7723561043616809, "acc_norm_stderr": 0.004184545675387351 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.28888888888888886, "acc_stderr": 0.03915450630414251, "acc_norm": 0.28888888888888886, "acc_norm_stderr": 0.03915450630414251 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.25, "acc_stderr": 0.03523807393012047, "acc_norm": 0.25, "acc_norm_stderr": 0.03523807393012047 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.23, "acc_stderr": 0.04229525846816505, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2943396226415094, "acc_stderr": 0.028049186315695248, "acc_norm": 0.2943396226415094, "acc_norm_stderr": 0.028049186315695248 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.25, "acc_stderr": 0.03621034121889507, "acc_norm": 0.25, "acc_norm_stderr": 
0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.26, "acc_stderr": 0.04408440022768079, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.3179190751445087, "acc_stderr": 0.03550683989165582, "acc_norm": 0.3179190751445087, "acc_norm_stderr": 0.03550683989165582 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.27450980392156865, "acc_stderr": 0.04440521906179326, "acc_norm": 0.27450980392156865, "acc_norm_stderr": 0.04440521906179326 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.30638297872340425, "acc_stderr": 0.030135906478517563, "acc_norm": 0.30638297872340425, "acc_norm_stderr": 0.030135906478517563 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.22807017543859648, "acc_stderr": 0.03947152782669415, "acc_norm": 0.22807017543859648, "acc_norm_stderr": 0.03947152782669415 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.3586206896551724, "acc_stderr": 0.03996629574876719, "acc_norm": 0.3586206896551724, "acc_norm_stderr": 0.03996629574876719 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.2619047619047619, "acc_stderr": 0.022644212615525208, "acc_norm": 0.2619047619047619, "acc_norm_stderr": 0.022644212615525208 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.1746031746031746, "acc_stderr": 0.03395490020856113, "acc_norm": 0.1746031746031746, "acc_norm_stderr": 0.03395490020856113 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.2806451612903226, "acc_stderr": 0.0255606047210229, "acc_norm": 0.2806451612903226, "acc_norm_stderr": 0.0255606047210229 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.29064039408866993, "acc_stderr": 0.031947400722655415, "acc_norm": 0.29064039408866993, "acc_norm_stderr": 0.031947400722655415 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.26, "acc_stderr": 0.04408440022768079, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.2787878787878788, "acc_stderr": 0.03501438706296781, "acc_norm": 0.2787878787878788, "acc_norm_stderr": 0.03501438706296781 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.3383838383838384, "acc_stderr": 0.033711241426263014, "acc_norm": 0.3383838383838384, "acc_norm_stderr": 0.033711241426263014 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.2849740932642487, "acc_stderr": 0.03257714077709661, "acc_norm": 0.2849740932642487, "acc_norm_stderr": 0.03257714077709661 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.32051282051282054, "acc_stderr": 0.02366129639396428, "acc_norm": 0.32051282051282054, "acc_norm_stderr": 0.02366129639396428 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.24444444444444444, "acc_stderr": 0.02620276653465215, "acc_norm": 
0.24444444444444444, "acc_norm_stderr": 0.02620276653465215 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.3277310924369748, "acc_stderr": 0.03048991141767323, "acc_norm": 0.3277310924369748, "acc_norm_stderr": 0.03048991141767323 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.36423841059602646, "acc_stderr": 0.03929111781242742, "acc_norm": 0.36423841059602646, "acc_norm_stderr": 0.03929111781242742 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.28073394495412846, "acc_stderr": 0.019266055045871613, "acc_norm": 0.28073394495412846, "acc_norm_stderr": 0.019266055045871613 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.2361111111111111, "acc_stderr": 0.02896370257079102, "acc_norm": 0.2361111111111111, "acc_norm_stderr": 0.02896370257079102 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.28431372549019607, "acc_stderr": 0.03166009679399812, "acc_norm": 0.28431372549019607, "acc_norm_stderr": 0.03166009679399812 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.3333333333333333, "acc_stderr": 0.03068582059661079, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.03068582059661079 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.32286995515695066, "acc_stderr": 0.031381476375754995, "acc_norm": 0.32286995515695066, "acc_norm_stderr": 0.031381476375754995 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.29770992366412213, "acc_stderr": 0.04010358942462203, "acc_norm": 0.29770992366412213, "acc_norm_stderr": 0.04010358942462203 }, "harness|hendrycksTest-international_law|5": { "acc": 0.3140495867768595, "acc_stderr": 0.042369647530410184, "acc_norm": 0.3140495867768595, "acc_norm_stderr": 0.042369647530410184 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.23148148148148148, "acc_stderr": 0.04077494709252627, "acc_norm": 0.23148148148148148, "acc_norm_stderr": 0.04077494709252627 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.3006134969325153, "acc_stderr": 0.03602511318806771, "acc_norm": 0.3006134969325153, "acc_norm_stderr": 0.03602511318806771 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3125, "acc_stderr": 0.043994650575715215, "acc_norm": 0.3125, "acc_norm_stderr": 0.043994650575715215 }, "harness|hendrycksTest-management|5": { "acc": 0.2912621359223301, "acc_stderr": 0.044986763205729224, "acc_norm": 0.2912621359223301, "acc_norm_stderr": 0.044986763205729224 }, "harness|hendrycksTest-marketing|5": { "acc": 0.27350427350427353, "acc_stderr": 0.029202540153431194, "acc_norm": 0.27350427350427353, "acc_norm_stderr": 0.029202540153431194 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.31545338441890164, "acc_stderr": 0.01661750173876339, "acc_norm": 0.31545338441890164, "acc_norm_stderr": 0.01661750173876339 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.34104046242774566, "acc_stderr": 0.025522474632121615, "acc_norm": 0.34104046242774566, "acc_norm_stderr": 0.025522474632121615 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2446927374301676, "acc_stderr": 0.014378169884098447, "acc_norm": 0.2446927374301676, "acc_norm_stderr": 0.014378169884098447 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.3235294117647059, "acc_stderr": 0.02678745311190653, "acc_norm": 0.3235294117647059, "acc_norm_stderr": 0.02678745311190653 }, "harness|hendrycksTest-philosophy|5": { 
"acc": 0.3311897106109325, "acc_stderr": 0.026730620728004913, "acc_norm": 0.3311897106109325, "acc_norm_stderr": 0.026730620728004913 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.33024691358024694, "acc_stderr": 0.026168298456732842, "acc_norm": 0.33024691358024694, "acc_norm_stderr": 0.026168298456732842 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.25177304964539005, "acc_stderr": 0.025892151156709405, "acc_norm": 0.25177304964539005, "acc_norm_stderr": 0.025892151156709405 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.26010430247718386, "acc_stderr": 0.011204382887823829, "acc_norm": 0.26010430247718386, "acc_norm_stderr": 0.011204382887823829 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.3897058823529412, "acc_stderr": 0.0296246635811597, "acc_norm": 0.3897058823529412, "acc_norm_stderr": 0.0296246635811597 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.272875816993464, "acc_stderr": 0.018020474148393577, "acc_norm": 0.272875816993464, "acc_norm_stderr": 0.018020474148393577 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.3181818181818182, "acc_stderr": 0.04461272175910508, "acc_norm": 0.3181818181818182, "acc_norm_stderr": 0.04461272175910508 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.4, "acc_stderr": 0.031362502409358936, "acc_norm": 0.4, "acc_norm_stderr": 0.031362502409358936 }, "harness|hendrycksTest-sociology|5": { "acc": 0.31840796019900497, "acc_stderr": 0.032941184790540944, "acc_norm": 0.31840796019900497, "acc_norm_stderr": 0.032941184790540944 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-virology|5": { "acc": 0.3855421686746988, "acc_stderr": 0.03789134424611549, "acc_norm": 0.3855421686746988, "acc_norm_stderr": 0.03789134424611549 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.2982456140350877, "acc_stderr": 0.03508771929824563, "acc_norm": 0.2982456140350877, "acc_norm_stderr": 0.03508771929824563 }, "harness|truthfulqa:mc|0": { "mc1": 0.22276621787025705, "mc1_stderr": 0.014566506961396731, "mc2": 0.36274944744996707, "mc2_stderr": 0.01351391478780607 }, "harness|winogrande|5": { "acc": 0.6953433307024467, "acc_stderr": 0.012935646499325307 }, "harness|drop|3": { "em": 0.0016778523489932886, "em_stderr": 0.00041913301788269156, "f1": 0.051564597315436486, "f1_stderr": 0.0012887815427970884 }, "harness|gsm8k|5": { "acc": 0.015163002274450341, "acc_stderr": 0.0033660229497263386 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
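As a closing usage note, a minimal sketch for pulling the per-sample details of a single task is shown below; it assumes the `harness_arc_challenge_25` config and the `latest` split alias that are declared in this card's configuration metadata, and it is an illustration rather than the canonical loading recipe:

```python
from datasets import load_dataset

# Minimal sketch: fetch the most recent per-sample details for one task.
# The config name and the "latest" split alias come from the configs
# declared in this dataset card's metadata; adjust them for other tasks.
arc_details = load_dataset(
    "open-llm-leaderboard/details_souvik0306__falcon_7b_3epoch_norobots_public",
    "harness_arc_challenge_25",
    split="latest",
)
print(arc_details)  # shows the number of rows and the per-sample columns
```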
open-llm-leaderboard/details_souvik0306__falcon_7b_3epoch_norobots
[ "region:us" ]
2023-11-23T18:19:20+00:00
{"pretty_name": "Evaluation run of souvik0306/falcon_7b_3epoch_norobots", "dataset_summary": "Dataset automatically created during the evaluation run of model [souvik0306/falcon_7b_3epoch_norobots](https://huggingface.co/souvik0306/falcon_7b_3epoch_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_souvik0306__falcon_7b_3epoch_norobots_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T18:17:00.996113](https://huggingface.co/datasets/open-llm-leaderboard/details_souvik0306__falcon_7b_3epoch_norobots_public/blob/main/results_2023-11-23T18-17-00.996113.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.30608343755546813,\n \"acc_stderr\": 0.032414744112033704,\n \"acc_norm\": 0.30836499703771436,\n \"acc_norm_stderr\": 0.03322598255455117,\n \"mc1\": 0.22276621787025705,\n \"mc1_stderr\": 0.014566506961396731,\n \"mc2\": 0.36274944744996707,\n \"mc2_stderr\": 0.01351391478780607,\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788269156,\n \"f1\": 0.051564597315436486,\n \"f1_stderr\": 0.0012887815427970884\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.44112627986348124,\n \"acc_stderr\": 0.014509747749064664,\n \"acc_norm\": 0.4761092150170648,\n \"acc_norm_stderr\": 0.014594701798071654\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5743875721967735,\n \"acc_stderr\": 0.0049342503908797785,\n \"acc_norm\": 0.7723561043616809,\n \"acc_norm_stderr\": 0.004184545675387351\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.28888888888888886,\n \"acc_stderr\": 0.03915450630414251,\n \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.03915450630414251\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03523807393012047,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03523807393012047\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2943396226415094,\n \"acc_stderr\": 0.028049186315695248,\n \"acc_norm\": 0.2943396226415094,\n \"acc_norm_stderr\": 0.028049186315695248\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.25,\n 
\"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3179190751445087,\n \"acc_stderr\": 0.03550683989165582,\n \"acc_norm\": 0.3179190751445087,\n \"acc_norm_stderr\": 0.03550683989165582\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179326,\n \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179326\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.30638297872340425,\n \"acc_stderr\": 0.030135906478517563,\n \"acc_norm\": 0.30638297872340425,\n \"acc_norm_stderr\": 0.030135906478517563\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.3586206896551724,\n \"acc_stderr\": 0.03996629574876719,\n \"acc_norm\": 0.3586206896551724,\n \"acc_norm_stderr\": 0.03996629574876719\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2619047619047619,\n \"acc_stderr\": 0.022644212615525208,\n \"acc_norm\": 0.2619047619047619,\n \"acc_norm_stderr\": 0.022644212615525208\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1746031746031746,\n \"acc_stderr\": 0.03395490020856113,\n \"acc_norm\": 0.1746031746031746,\n \"acc_norm_stderr\": 0.03395490020856113\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2806451612903226,\n \"acc_stderr\": 0.0255606047210229,\n \"acc_norm\": 0.2806451612903226,\n \"acc_norm_stderr\": 0.0255606047210229\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.29064039408866993,\n \"acc_stderr\": 0.031947400722655415,\n \"acc_norm\": 0.29064039408866993,\n \"acc_norm_stderr\": 0.031947400722655415\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2787878787878788,\n \"acc_stderr\": 0.03501438706296781,\n \"acc_norm\": 0.2787878787878788,\n \"acc_norm_stderr\": 0.03501438706296781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.3383838383838384,\n \"acc_stderr\": 0.033711241426263014,\n \"acc_norm\": 0.3383838383838384,\n \"acc_norm_stderr\": 0.033711241426263014\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.2849740932642487,\n \"acc_stderr\": 0.03257714077709661,\n 
\"acc_norm\": 0.2849740932642487,\n \"acc_norm_stderr\": 0.03257714077709661\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.32051282051282054,\n \"acc_stderr\": 0.02366129639396428,\n \"acc_norm\": 0.32051282051282054,\n \"acc_norm_stderr\": 0.02366129639396428\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24444444444444444,\n \"acc_stderr\": 0.02620276653465215,\n \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.02620276653465215\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.3277310924369748,\n \"acc_stderr\": 0.03048991141767323,\n \"acc_norm\": 0.3277310924369748,\n \"acc_norm_stderr\": 0.03048991141767323\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.28073394495412846,\n \"acc_stderr\": 0.019266055045871613,\n \"acc_norm\": 0.28073394495412846,\n \"acc_norm_stderr\": 0.019266055045871613\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.2361111111111111,\n \"acc_stderr\": 0.02896370257079102,\n \"acc_norm\": 0.2361111111111111,\n \"acc_norm_stderr\": 0.02896370257079102\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.03166009679399812,\n \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.03166009679399812\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.03068582059661079,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.03068582059661079\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.32286995515695066,\n \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.32286995515695066,\n \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.29770992366412213,\n \"acc_stderr\": 0.04010358942462203,\n \"acc_norm\": 0.29770992366412213,\n \"acc_norm_stderr\": 0.04010358942462203\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.3140495867768595,\n \"acc_stderr\": 0.042369647530410184,\n \"acc_norm\": 0.3140495867768595,\n \"acc_norm_stderr\": 0.042369647530410184\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3006134969325153,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.3006134969325153,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.2912621359223301,\n \"acc_stderr\": 0.044986763205729224,\n \"acc_norm\": 0.2912621359223301,\n \"acc_norm_stderr\": 0.044986763205729224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.27350427350427353,\n \"acc_stderr\": 0.029202540153431194,\n \"acc_norm\": 0.27350427350427353,\n \"acc_norm_stderr\": 0.029202540153431194\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 
0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.31545338441890164,\n \"acc_stderr\": 0.01661750173876339,\n \"acc_norm\": 0.31545338441890164,\n \"acc_norm_stderr\": 0.01661750173876339\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.34104046242774566,\n \"acc_stderr\": 0.025522474632121615,\n \"acc_norm\": 0.34104046242774566,\n \"acc_norm_stderr\": 0.025522474632121615\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2446927374301676,\n \"acc_stderr\": 0.014378169884098447,\n \"acc_norm\": 0.2446927374301676,\n \"acc_norm_stderr\": 0.014378169884098447\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.02678745311190653,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.02678745311190653\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3311897106109325,\n \"acc_stderr\": 0.026730620728004913,\n \"acc_norm\": 0.3311897106109325,\n \"acc_norm_stderr\": 0.026730620728004913\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.33024691358024694,\n \"acc_stderr\": 0.026168298456732842,\n \"acc_norm\": 0.33024691358024694,\n \"acc_norm_stderr\": 0.026168298456732842\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.25177304964539005,\n \"acc_stderr\": 0.025892151156709405,\n \"acc_norm\": 0.25177304964539005,\n \"acc_norm_stderr\": 0.025892151156709405\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.26010430247718386,\n \"acc_stderr\": 0.011204382887823829,\n \"acc_norm\": 0.26010430247718386,\n \"acc_norm_stderr\": 0.011204382887823829\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3897058823529412,\n \"acc_stderr\": 0.0296246635811597,\n \"acc_norm\": 0.3897058823529412,\n \"acc_norm_stderr\": 0.0296246635811597\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.272875816993464,\n \"acc_stderr\": 0.018020474148393577,\n \"acc_norm\": 0.272875816993464,\n \"acc_norm_stderr\": 0.018020474148393577\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.3181818181818182,\n \"acc_stderr\": 0.04461272175910508,\n \"acc_norm\": 0.3181818181818182,\n \"acc_norm_stderr\": 0.04461272175910508\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.031362502409358936,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.031362502409358936\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.31840796019900497,\n \"acc_stderr\": 0.032941184790540944,\n \"acc_norm\": 0.31840796019900497,\n \"acc_norm_stderr\": 0.032941184790540944\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3855421686746988,\n \"acc_stderr\": 0.03789134424611549,\n \"acc_norm\": 0.3855421686746988,\n \"acc_norm_stderr\": 0.03789134424611549\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2982456140350877,\n \"acc_stderr\": 0.03508771929824563,\n \"acc_norm\": 0.2982456140350877,\n \"acc_norm_stderr\": 0.03508771929824563\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22276621787025705,\n \"mc1_stderr\": 0.014566506961396731,\n \"mc2\": 0.36274944744996707,\n \"mc2_stderr\": 0.01351391478780607\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6953433307024467,\n \"acc_stderr\": 0.012935646499325307\n },\n \"harness|drop|3\": {\n 
\"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788269156,\n \"f1\": 0.051564597315436486,\n \"f1_stderr\": 0.0012887815427970884\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.015163002274450341,\n \"acc_stderr\": 0.0033660229497263386\n }\n}\n```", "repo_url": "https://huggingface.co/souvik0306/falcon_7b_3epoch_norobots", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|drop|3_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-17-00.996113.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-17-00.996113.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-17-00.996113.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-17-00.996113.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-17-00.996113.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["**/details_harness|winogrande|5_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T18-17-00.996113.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T18_17_00.996113", "path": ["results_2023-11-23T18-17-00.996113.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T18-17-00.996113.parquet"]}]}]}
2023-11-23T18:20:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of souvik0306/falcon_7b_3epoch_norobots ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model souvik0306/falcon_7b_3epoch_norobots on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T18:17:00.996113 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of souvik0306/falcon_7b_3epoch_norobots", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model souvik0306/falcon_7b_3epoch_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:17:00.996113(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of souvik0306/falcon_7b_3epoch_norobots", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model souvik0306/falcon_7b_3epoch_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:17:00.996113(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 27, 31, 176, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of souvik0306/falcon_7b_3epoch_norobots## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model souvik0306/falcon_7b_3epoch_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T18:17:00.996113(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
d2a049ebe88e48c13347b18a94fe0c6c3ba48c28
# Dataset Card for Evaluation run of Gryphe/MythoMist-7b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Gryphe/MythoMist-7b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [Gryphe/MythoMist-7b](https://huggingface.co/Gryphe/MythoMist-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Gryphe__MythoMist-7b_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-23T18:33:43.562121](https://huggingface.co/datasets/open-llm-leaderboard/details_Gryphe__MythoMist-7b_public/blob/main/results_2023-11-23T18-33-43.562121.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6193933424443757, "acc_stderr": 0.03253049842540975, "acc_norm": 0.6273757812096611, "acc_norm_stderr": 0.03322659183767027, "mc1": 0.43818849449204406, "mc1_stderr": 0.017369236164404445, "mc2": 0.5997836138576584, "mc2_stderr": 0.015379030818687125, "em": 0.22902684563758388, "em_stderr": 0.0043033084382756255, "f1": 0.37819945469799016, "f1_stderr": 0.004228456430289263 }, "harness|arc:challenge|25": { "acc": 0.6348122866894198, "acc_stderr": 0.014070265519268802, "acc_norm": 0.658703071672355, "acc_norm_stderr": 0.013855831287497728 }, "harness|hellaswag|10": { "acc": 0.6441943835889266, "acc_stderr": 0.0047777825848177875, "acc_norm": 0.8354909380601474, "acc_norm_stderr": 0.003699791934754364 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6296296296296297, "acc_stderr": 0.041716541613545426, "acc_norm": 0.6296296296296297, "acc_norm_stderr": 0.041716541613545426 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6644736842105263, "acc_stderr": 0.03842498559395268, "acc_norm": 0.6644736842105263, "acc_norm_stderr": 0.03842498559395268 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.56, "acc_stderr": 0.04988876515698589, "acc_norm": 0.56, "acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6754716981132075, "acc_stderr": 0.02881561571343211, "acc_norm": 0.6754716981132075, "acc_norm_stderr": 0.02881561571343211 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7222222222222222, "acc_stderr": 0.037455547914624555, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.037455547914624555 }, 
"harness|hendrycksTest-college_chemistry|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, "acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.04793724854411019, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411019 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6242774566473989, "acc_stderr": 0.03692820767264866, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.03692820767264866 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.044084400227680794, "acc_norm": 0.74, "acc_norm_stderr": 0.044084400227680794 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5319148936170213, "acc_stderr": 0.03261936918467381, "acc_norm": 0.5319148936170213, "acc_norm_stderr": 0.03261936918467381 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4649122807017544, "acc_stderr": 0.046920083813689104, "acc_norm": 0.4649122807017544, "acc_norm_stderr": 0.046920083813689104 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5103448275862069, "acc_stderr": 0.04165774775728763, "acc_norm": 0.5103448275862069, "acc_norm_stderr": 0.04165774775728763 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.36772486772486773, "acc_stderr": 0.02483383982556242, "acc_norm": 0.36772486772486773, "acc_norm_stderr": 0.02483383982556242 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.42857142857142855, "acc_stderr": 0.0442626668137991, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.0442626668137991 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7645161290322581, "acc_stderr": 0.024137632429337717, "acc_norm": 0.7645161290322581, "acc_norm_stderr": 0.024137632429337717 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4975369458128079, "acc_stderr": 0.03517945038691063, "acc_norm": 0.4975369458128079, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621505, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.0328766675860349, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.0328766675860349 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7676767676767676, "acc_stderr": 0.03008862949021749, "acc_norm": 0.7676767676767676, "acc_norm_stderr": 0.03008862949021749 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.021500249576033446, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.021500249576033446 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6128205128205129, "acc_stderr": 0.024697216930878937, "acc_norm": 0.6128205128205129, "acc_norm_stderr": 0.024697216930878937 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34814814814814815, "acc_stderr": 0.029045600290616255, "acc_norm": 0.34814814814814815, "acc_norm_stderr": 
0.029045600290616255 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6638655462184874, "acc_stderr": 0.030684737115135363, "acc_norm": 0.6638655462184874, "acc_norm_stderr": 0.030684737115135363 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33774834437086093, "acc_stderr": 0.03861557546255169, "acc_norm": 0.33774834437086093, "acc_norm_stderr": 0.03861557546255169 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8330275229357799, "acc_stderr": 0.01599015488507338, "acc_norm": 0.8330275229357799, "acc_norm_stderr": 0.01599015488507338 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.48148148148148145, "acc_stderr": 0.034076320938540516, "acc_norm": 0.48148148148148145, "acc_norm_stderr": 0.034076320938540516 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7990196078431373, "acc_stderr": 0.02812597226565438, "acc_norm": 0.7990196078431373, "acc_norm_stderr": 0.02812597226565438 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7848101265822784, "acc_stderr": 0.026750826994676177, "acc_norm": 0.7848101265822784, "acc_norm_stderr": 0.026750826994676177 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6905829596412556, "acc_stderr": 0.03102441174057221, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.03102441174057221 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.732824427480916, "acc_stderr": 0.038808483010823944, "acc_norm": 0.732824427480916, "acc_norm_stderr": 0.038808483010823944 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8099173553719008, "acc_stderr": 0.03581796951709282, "acc_norm": 0.8099173553719008, "acc_norm_stderr": 0.03581796951709282 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7407407407407407, "acc_stderr": 0.04236511258094632, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.04236511258094632 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7116564417177914, "acc_stderr": 0.035590395316173425, "acc_norm": 0.7116564417177914, "acc_norm_stderr": 0.035590395316173425 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.8252427184466019, "acc_stderr": 0.0376017800602662, "acc_norm": 0.8252427184466019, "acc_norm_stderr": 0.0376017800602662 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8632478632478633, "acc_stderr": 0.022509033937077805, "acc_norm": 0.8632478632478633, "acc_norm_stderr": 0.022509033937077805 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8122605363984674, "acc_stderr": 0.01396439376989914, "acc_norm": 0.8122605363984674, "acc_norm_stderr": 0.01396439376989914 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6820809248554913, "acc_stderr": 0.025070713719153186, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.025070713719153186 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39664804469273746, "acc_stderr": 0.01636135476982247, "acc_norm": 0.39664804469273746, "acc_norm_stderr": 0.01636135476982247 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.696078431372549, "acc_stderr": 0.026336613469046626, "acc_norm": 0.696078431372549, "acc_norm_stderr": 0.026336613469046626 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6752411575562701, 
"acc_stderr": 0.026596782287697043, "acc_norm": 0.6752411575562701, "acc_norm_stderr": 0.026596782287697043 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7253086419753086, "acc_stderr": 0.024836057868294674, "acc_norm": 0.7253086419753086, "acc_norm_stderr": 0.024836057868294674 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4397163120567376, "acc_stderr": 0.02960991207559411, "acc_norm": 0.4397163120567376, "acc_norm_stderr": 0.02960991207559411 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4445893089960887, "acc_stderr": 0.012691575792657115, "acc_norm": 0.4445893089960887, "acc_norm_stderr": 0.012691575792657115 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6544117647058824, "acc_stderr": 0.028888193103988633, "acc_norm": 0.6544117647058824, "acc_norm_stderr": 0.028888193103988633 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6421568627450981, "acc_stderr": 0.01939305840235544, "acc_norm": 0.6421568627450981, "acc_norm_stderr": 0.01939305840235544 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.710204081632653, "acc_stderr": 0.029043088683304328, "acc_norm": 0.710204081632653, "acc_norm_stderr": 0.029043088683304328 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8407960199004975, "acc_stderr": 0.02587064676616913, "acc_norm": 0.8407960199004975, "acc_norm_stderr": 0.02587064676616913 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.82, "acc_stderr": 0.038612291966536934, "acc_norm": 0.82, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640038, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640038 }, "harness|truthfulqa:mc|0": { "mc1": 0.43818849449204406, "mc1_stderr": 0.017369236164404445, "mc2": 0.5997836138576584, "mc2_stderr": 0.015379030818687125 }, "harness|winogrande|5": { "acc": 0.7805840568271507, "acc_stderr": 0.01163126836060778 }, "harness|drop|3": { "em": 0.22902684563758388, "em_stderr": 0.0043033084382756255, "f1": 0.37819945469799016, "f1_stderr": 0.004228456430289263 }, "harness|gsm8k|5": { "acc": 0.20242608036391205, "acc_stderr": 0.011067792285006492 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
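The snippet above loads one detail configuration. As a complementary usage sketch, the aggregated metrics can also be read from the "results" configuration of the same repository (a minimal sketch assuming only the `datasets` library; the repository, configuration, and split names are the ones listed in this card, while the exact row schema may vary between runs):

```python
from datasets import load_dataset

# Load the aggregated metrics of the latest evaluation run.
# The "results" configuration and its "latest" split are listed
# in this card's configs section.
results = load_dataset(
    "open-llm-leaderboard/details_Gryphe__MythoMist-7b_public",
    "results",
    split="latest",
)

# Each row holds the aggregated metrics of one run; inspect the first.
print(results[0])
```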
open-llm-leaderboard/details_Gryphe__MythoMist-7b
[ "region:us" ]
2023-11-23T18:36:44+00:00
{"pretty_name": "Evaluation run of Gryphe/MythoMist-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Gryphe/MythoMist-7b](https://huggingface.co/Gryphe/MythoMist-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Gryphe__MythoMist-7b_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T18:33:43.562121](https://huggingface.co/datasets/open-llm-leaderboard/details_Gryphe__MythoMist-7b_public/blob/main/results_2023-11-23T18-33-43.562121.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6193933424443757,\n \"acc_stderr\": 0.03253049842540975,\n \"acc_norm\": 0.6273757812096611,\n \"acc_norm_stderr\": 0.03322659183767027,\n \"mc1\": 0.43818849449204406,\n \"mc1_stderr\": 0.017369236164404445,\n \"mc2\": 0.5997836138576584,\n \"mc2_stderr\": 0.015379030818687125,\n \"em\": 0.22902684563758388,\n \"em_stderr\": 0.0043033084382756255,\n \"f1\": 0.37819945469799016,\n \"f1_stderr\": 0.004228456430289263\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6348122866894198,\n \"acc_stderr\": 0.014070265519268802,\n \"acc_norm\": 0.658703071672355,\n \"acc_norm_stderr\": 0.013855831287497728\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6441943835889266,\n \"acc_stderr\": 0.0047777825848177875,\n \"acc_norm\": 0.8354909380601474,\n \"acc_norm_stderr\": 0.003699791934754364\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.037455547914624555\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.03692820767264866,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.03692820767264866\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467381,\n \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467381\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5103448275862069,\n \"acc_stderr\": 0.04165774775728763,\n \"acc_norm\": 0.5103448275862069,\n \"acc_norm_stderr\": 0.04165774775728763\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.36772486772486773,\n \"acc_stderr\": 0.02483383982556242,\n \"acc_norm\": 0.36772486772486773,\n \"acc_norm_stderr\": 0.02483383982556242\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7645161290322581,\n \"acc_stderr\": 0.024137632429337717,\n \"acc_norm\": 0.7645161290322581,\n \"acc_norm_stderr\": 0.024137632429337717\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.03008862949021749,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.03008862949021749\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033446,\n \"acc_norm\": 0.9015544041450777,\n 
\"acc_norm_stderr\": 0.021500249576033446\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6128205128205129,\n \"acc_stderr\": 0.024697216930878937,\n \"acc_norm\": 0.6128205128205129,\n \"acc_norm_stderr\": 0.024697216930878937\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135363,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135363\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.034076320938540516,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.034076320938540516\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7990196078431373,\n \"acc_stderr\": 0.02812597226565438,\n \"acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.02812597226565438\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676177,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676177\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.04236511258094632,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.04236511258094632\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7116564417177914,\n \"acc_stderr\": 0.035590395316173425,\n \"acc_norm\": 0.7116564417177914,\n \"acc_norm_stderr\": 0.035590395316173425\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.0376017800602662,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.0376017800602662\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.022509033937077805,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.022509033937077805\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n \"acc_stderr\": 0.01396439376989914,\n \"acc_norm\": 0.8122605363984674,\n \"acc_norm_stderr\": 0.01396439376989914\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.025070713719153186,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.025070713719153186\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39664804469273746,\n \"acc_stderr\": 0.01636135476982247,\n \"acc_norm\": 0.39664804469273746,\n \"acc_norm_stderr\": 0.01636135476982247\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.026336613469046626,\n \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.026336613469046626\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7253086419753086,\n \"acc_stderr\": 0.024836057868294674,\n \"acc_norm\": 0.7253086419753086,\n \"acc_norm_stderr\": 0.024836057868294674\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4397163120567376,\n \"acc_stderr\": 0.02960991207559411,\n \"acc_norm\": 0.4397163120567376,\n \"acc_norm_stderr\": 0.02960991207559411\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4445893089960887,\n \"acc_stderr\": 0.012691575792657115,\n \"acc_norm\": 0.4445893089960887,\n \"acc_norm_stderr\": 0.012691575792657115\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.028888193103988633,\n \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.028888193103988633\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6421568627450981,\n \"acc_stderr\": 0.01939305840235544,\n \"acc_norm\": 0.6421568627450981,\n \"acc_norm_stderr\": 0.01939305840235544\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.43818849449204406,\n \"mc1_stderr\": 0.017369236164404445,\n \"mc2\": 0.5997836138576584,\n \"mc2_stderr\": 0.015379030818687125\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7805840568271507,\n \"acc_stderr\": 0.01163126836060778\n },\n \"harness|drop|3\": {\n \"em\": 
0.22902684563758388,\n \"em_stderr\": 0.0043033084382756255,\n \"f1\": 0.37819945469799016,\n \"f1_stderr\": 0.004228456430289263\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.20242608036391205,\n \"acc_stderr\": 0.011067792285006492\n }\n}\n```", "repo_url": "https://huggingface.co/Gryphe/MythoMist-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|drop|3_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-33-43.562121.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-33-43.562121.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-33-43.562121.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-33-43.562121.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", 
"data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": 
["**/details_harness|truthfulqa:mc|0_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["**/details_harness|winogrande|5_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T18-33-43.562121.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T18_33_43.562121", "path": ["results_2023-11-23T18-33-43.562121.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T18-33-43.562121.parquet"]}]}]}
2023-11-23T18:37:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Gryphe/MythoMist-7b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Gryphe/MythoMist-7b on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following (see the sketch just after this card): ## Latest results These are the latest results from run 2023-11-23T18:33:43.562121 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
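The loading snippet that the summary above refers to ("you can for instance do the following") was dropped when this card text was flattened. Below is a minimal sketch of what it would look like; the repo id is an assumption inferred from the usual open-llm-leaderboard naming scheme (`details_<org>__<model>`), while the "harness_winogrande_5" config name and the "latest" split do appear in the record metadata above:

```python
from datasets import load_dataset

# Repo id assumed from the open-llm-leaderboard naming scheme
# (details_<org>__<model>); it is not spelled out in this flattened card.
data = load_dataset(
    "open-llm-leaderboard/details_Gryphe__MythoMist-7b",
    "harness_winogrande_5",  # one of the 64 per-task configurations
    split="latest",          # the "latest" split points at the most recent run
)
```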
[ "# Dataset Card for Evaluation run of Gryphe/MythoMist-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Gryphe/MythoMist-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:33:43.562121(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Gryphe/MythoMist-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Gryphe/MythoMist-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:33:43.562121(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 18, 31, 167, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Gryphe/MythoMist-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Gryphe/MythoMist-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T18:33:43.562121(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
03cf232728e26588881cb9344a458b6b95efb445
# Dataset Card for "Impartial-GenAI-Dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cmglmsr/Impartial-GenAI-Dataset
[ "region:us" ]
2023-11-23T18:41:27+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31254, "num_examples": 3}], "download_size": 32194, "dataset_size": 31254}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-23T18:41:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Impartial-GenAI-Dataset" More Information needed
[ "# Dataset Card for \"Impartial-GenAI-Dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Impartial-GenAI-Dataset\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Impartial-GenAI-Dataset\"\n\nMore Information needed" ]
865a4ee73b39269e6c6ab1d3a1a4229abe4babd3
# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-v2

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-v2
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [Korabbit/Llama-2-7b-chat-hf-afr-200step-v2](https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-v2_public",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-11-23T18:39:46.756166](https://huggingface.co/datasets/open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-v2_public/blob/main/results_2023-11-23T18-39-46.756166.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.4844173262075713, "acc_stderr": 0.03422515121320321, "acc_norm": 0.49096056822371376, "acc_norm_stderr": 0.03503280881820784, "mc1": 0.29008567931456547, "mc1_stderr": 0.01588623687420952, "mc2": 0.43685304669032105, "mc2_stderr": 0.015582536589566296, "em": 0.02160234899328859, "em_stderr": 0.0014888393578850528, "f1": 0.08137164429530211, "f1_stderr": 0.0020119444875776374 }, "harness|arc:challenge|25": { "acc": 0.4880546075085324, "acc_stderr": 0.014607220340597171, "acc_norm": 0.5179180887372014, "acc_norm_stderr": 0.014602005585490978 }, "harness|hellaswag|10": { "acc": 0.5889265086636128, "acc_stderr": 0.004910229643262741, "acc_norm": 0.7741485759808803, "acc_norm_stderr": 0.004172872282984212 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.04461960433384741, "acc_norm": 0.27, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.42962962962962964, "acc_stderr": 0.04276349494376599, "acc_norm": 0.42962962962962964, "acc_norm_stderr": 0.04276349494376599 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.4605263157894737, "acc_stderr": 0.04056242252249034, "acc_norm": 0.4605263157894737, "acc_norm_stderr": 0.04056242252249034 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.54, "acc_stderr": 0.05009082659620333, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.539622641509434, "acc_stderr": 0.030676096599389184, "acc_norm": 0.539622641509434, "acc_norm_stderr": 0.030676096599389184 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5208333333333334, "acc_stderr": 0.041775789507399935, "acc_norm": 0.5208333333333334, "acc_norm_stderr": 0.041775789507399935 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.28, "acc_stderr": 0.04512608598542127, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.4046242774566474, "acc_stderr": 0.03742461193887248, "acc_norm": 0.4046242774566474, "acc_norm_stderr": 0.03742461193887248 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.22549019607843138, "acc_stderr": 0.041583075330832865, "acc_norm": 0.22549019607843138, "acc_norm_stderr": 0.041583075330832865 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.40425531914893614, "acc_stderr": 0.032081157507886836, "acc_norm": 0.40425531914893614, "acc_norm_stderr": 0.032081157507886836 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.3684210526315789, "acc_stderr": 0.04537815354939392, "acc_norm": 0.3684210526315789, "acc_norm_stderr": 0.04537815354939392 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5103448275862069, "acc_stderr": 0.041657747757287644, "acc_norm": 0.5103448275862069, "acc_norm_stderr": 0.041657747757287644 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30423280423280424, "acc_stderr": 0.023695415009463087, "acc_norm": 0.30423280423280424, 
"acc_norm_stderr": 0.023695415009463087 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.24603174603174602, "acc_stderr": 0.03852273364924314, "acc_norm": 0.24603174603174602, "acc_norm_stderr": 0.03852273364924314 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5290322580645161, "acc_stderr": 0.028396016402761005, "acc_norm": 0.5290322580645161, "acc_norm_stderr": 0.028396016402761005 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.3793103448275862, "acc_stderr": 0.03413963805906235, "acc_norm": 0.3793103448275862, "acc_norm_stderr": 0.03413963805906235 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.593939393939394, "acc_stderr": 0.03834816355401181, "acc_norm": 0.593939393939394, "acc_norm_stderr": 0.03834816355401181 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.601010101010101, "acc_stderr": 0.034889016168527326, "acc_norm": 0.601010101010101, "acc_norm_stderr": 0.034889016168527326 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7150259067357513, "acc_stderr": 0.032577140777096614, "acc_norm": 0.7150259067357513, "acc_norm_stderr": 0.032577140777096614 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.4358974358974359, "acc_stderr": 0.02514180151117749, "acc_norm": 0.4358974358974359, "acc_norm_stderr": 0.02514180151117749 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.26666666666666666, "acc_stderr": 0.026962424325073835, "acc_norm": 0.26666666666666666, "acc_norm_stderr": 0.026962424325073835 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.42016806722689076, "acc_stderr": 0.03206183783236152, "acc_norm": 0.42016806722689076, "acc_norm_stderr": 0.03206183783236152 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2913907284768212, "acc_stderr": 0.03710185726119995, "acc_norm": 0.2913907284768212, "acc_norm_stderr": 0.03710185726119995 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.6770642201834862, "acc_stderr": 0.02004811592341532, "acc_norm": 0.6770642201834862, "acc_norm_stderr": 0.02004811592341532 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.3287037037037037, "acc_stderr": 0.032036140846700596, "acc_norm": 0.3287037037037037, "acc_norm_stderr": 0.032036140846700596 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.6568627450980392, "acc_stderr": 0.033321399446680854, "acc_norm": 0.6568627450980392, "acc_norm_stderr": 0.033321399446680854 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.6666666666666666, "acc_stderr": 0.03068582059661079, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.03068582059661079 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5739910313901345, "acc_stderr": 0.03318833286217281, "acc_norm": 0.5739910313901345, "acc_norm_stderr": 0.03318833286217281 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5648854961832062, "acc_stderr": 0.04348208051644858, "acc_norm": 0.5648854961832062, "acc_norm_stderr": 0.04348208051644858 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6363636363636364, "acc_stderr": 0.043913262867240704, "acc_norm": 0.6363636363636364, "acc_norm_stderr": 
0.043913262867240704 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.5925925925925926, "acc_stderr": 0.04750077341199984, "acc_norm": 0.5925925925925926, "acc_norm_stderr": 0.04750077341199984 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.558282208588957, "acc_stderr": 0.03901591825836184, "acc_norm": 0.558282208588957, "acc_norm_stderr": 0.03901591825836184 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3125, "acc_stderr": 0.043994650575715215, "acc_norm": 0.3125, "acc_norm_stderr": 0.043994650575715215 }, "harness|hendrycksTest-management|5": { "acc": 0.6893203883495146, "acc_stderr": 0.04582124160161551, "acc_norm": 0.6893203883495146, "acc_norm_stderr": 0.04582124160161551 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7136752136752137, "acc_stderr": 0.02961432369045665, "acc_norm": 0.7136752136752137, "acc_norm_stderr": 0.02961432369045665 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.49, "acc_stderr": 0.05024183937956911, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.6819923371647509, "acc_stderr": 0.01665348627561539, "acc_norm": 0.6819923371647509, "acc_norm_stderr": 0.01665348627561539 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5173410404624278, "acc_stderr": 0.02690290045866664, "acc_norm": 0.5173410404624278, "acc_norm_stderr": 0.02690290045866664 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2223463687150838, "acc_stderr": 0.013907189208156881, "acc_norm": 0.2223463687150838, "acc_norm_stderr": 0.013907189208156881 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5130718954248366, "acc_stderr": 0.028620130800700246, "acc_norm": 0.5130718954248366, "acc_norm_stderr": 0.028620130800700246 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5755627009646302, "acc_stderr": 0.028071928247946215, "acc_norm": 0.5755627009646302, "acc_norm_stderr": 0.028071928247946215 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5709876543209876, "acc_stderr": 0.027538925613470863, "acc_norm": 0.5709876543209876, "acc_norm_stderr": 0.027538925613470863 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.36524822695035464, "acc_stderr": 0.02872386385328128, "acc_norm": 0.36524822695035464, "acc_norm_stderr": 0.02872386385328128 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.34876140808344197, "acc_stderr": 0.01217203515712712, "acc_norm": 0.34876140808344197, "acc_norm_stderr": 0.01217203515712712 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.45588235294117646, "acc_stderr": 0.030254372573976684, "acc_norm": 0.45588235294117646, "acc_norm_stderr": 0.030254372573976684 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.4934640522875817, "acc_stderr": 0.020226106567657807, "acc_norm": 0.4934640522875817, "acc_norm_stderr": 0.020226106567657807 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.5363636363636364, "acc_stderr": 0.04776449162396197, "acc_norm": 0.5363636363636364, "acc_norm_stderr": 0.04776449162396197 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5224489795918368, "acc_stderr": 0.031976941187136725, "acc_norm": 0.5224489795918368, "acc_norm_stderr": 0.031976941187136725 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6467661691542289, "acc_stderr": 0.03379790611796777, "acc_norm": 0.6467661691542289, "acc_norm_stderr": 0.03379790611796777 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 
0.044619604333847394 }, "harness|hendrycksTest-virology|5": { "acc": 0.42771084337349397, "acc_stderr": 0.038515976837185335, "acc_norm": 0.42771084337349397, "acc_norm_stderr": 0.038515976837185335 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7192982456140351, "acc_stderr": 0.03446296217088427, "acc_norm": 0.7192982456140351, "acc_norm_stderr": 0.03446296217088427 }, "harness|truthfulqa:mc|0": { "mc1": 0.29008567931456547, "mc1_stderr": 0.01588623687420952, "mc2": 0.43685304669032105, "mc2_stderr": 0.015582536589566296 }, "harness|winogrande|5": { "acc": 0.7190213101815311, "acc_stderr": 0.012632541095875824 }, "harness|drop|3": { "em": 0.02160234899328859, "em_stderr": 0.0014888393578850528, "f1": 0.08137164429530211, "f1_stderr": 0.0020119444875776374 }, "harness|gsm8k|5": { "acc": 0.07884761182714177, "acc_stderr": 0.007423390519873241 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-v2
[ "region:us" ]
2023-11-23T18:42:54+00:00
{"pretty_name": "Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [Korabbit/Llama-2-7b-chat-hf-afr-200step-v2](https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-v2_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T18:39:46.756166](https://huggingface.co/datasets/open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-v2_public/blob/main/results_2023-11-23T18-39-46.756166.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4844173262075713,\n \"acc_stderr\": 0.03422515121320321,\n \"acc_norm\": 0.49096056822371376,\n \"acc_norm_stderr\": 0.03503280881820784,\n \"mc1\": 0.29008567931456547,\n \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.43685304669032105,\n \"mc2_stderr\": 0.015582536589566296,\n \"em\": 0.02160234899328859,\n \"em_stderr\": 0.0014888393578850528,\n \"f1\": 0.08137164429530211,\n \"f1_stderr\": 0.0020119444875776374\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4880546075085324,\n \"acc_stderr\": 0.014607220340597171,\n \"acc_norm\": 0.5179180887372014,\n \"acc_norm_stderr\": 0.014602005585490978\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5889265086636128,\n \"acc_stderr\": 0.004910229643262741,\n \"acc_norm\": 0.7741485759808803,\n \"acc_norm_stderr\": 0.004172872282984212\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.42962962962962964,\n \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.42962962962962964,\n \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4605263157894737,\n \"acc_stderr\": 0.04056242252249034,\n \"acc_norm\": 0.4605263157894737,\n \"acc_norm_stderr\": 0.04056242252249034\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.539622641509434,\n \"acc_stderr\": 0.030676096599389184,\n \"acc_norm\": 0.539622641509434,\n \"acc_norm_stderr\": 0.030676096599389184\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5208333333333334,\n \"acc_stderr\": 
0.041775789507399935,\n \"acc_norm\": 0.5208333333333334,\n \"acc_norm_stderr\": 0.041775789507399935\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4046242774566474,\n \"acc_stderr\": 0.03742461193887248,\n \"acc_norm\": 0.4046242774566474,\n \"acc_norm_stderr\": 0.03742461193887248\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.041583075330832865,\n \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.041583075330832865\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.40425531914893614,\n \"acc_stderr\": 0.032081157507886836,\n \"acc_norm\": 0.40425531914893614,\n \"acc_norm_stderr\": 0.032081157507886836\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3684210526315789,\n \"acc_stderr\": 0.04537815354939392,\n \"acc_norm\": 0.3684210526315789,\n \"acc_norm_stderr\": 0.04537815354939392\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5103448275862069,\n \"acc_stderr\": 0.041657747757287644,\n \"acc_norm\": 0.5103448275862069,\n \"acc_norm_stderr\": 0.041657747757287644\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30423280423280424,\n \"acc_stderr\": 0.023695415009463087,\n \"acc_norm\": 0.30423280423280424,\n \"acc_norm_stderr\": 0.023695415009463087\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.03852273364924314,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.03852273364924314\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5290322580645161,\n \"acc_stderr\": 0.028396016402761005,\n \"acc_norm\": 0.5290322580645161,\n \"acc_norm_stderr\": 0.028396016402761005\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3793103448275862,\n \"acc_stderr\": 0.03413963805906235,\n \"acc_norm\": 0.3793103448275862,\n \"acc_norm_stderr\": 0.03413963805906235\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.593939393939394,\n \"acc_stderr\": 0.03834816355401181,\n \"acc_norm\": 0.593939393939394,\n \"acc_norm_stderr\": 0.03834816355401181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.601010101010101,\n \"acc_stderr\": 0.034889016168527326,\n \"acc_norm\": 0.601010101010101,\n \"acc_norm_stderr\": 0.034889016168527326\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n 
\"acc\": 0.7150259067357513,\n \"acc_stderr\": 0.032577140777096614,\n \"acc_norm\": 0.7150259067357513,\n \"acc_norm_stderr\": 0.032577140777096614\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4358974358974359,\n \"acc_stderr\": 0.02514180151117749,\n \"acc_norm\": 0.4358974358974359,\n \"acc_norm_stderr\": 0.02514180151117749\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.026962424325073835,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.026962424325073835\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.42016806722689076,\n \"acc_stderr\": 0.03206183783236152,\n \"acc_norm\": 0.42016806722689076,\n \"acc_norm_stderr\": 0.03206183783236152\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2913907284768212,\n \"acc_stderr\": 0.03710185726119995,\n \"acc_norm\": 0.2913907284768212,\n \"acc_norm_stderr\": 0.03710185726119995\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6770642201834862,\n \"acc_stderr\": 0.02004811592341532,\n \"acc_norm\": 0.6770642201834862,\n \"acc_norm_stderr\": 0.02004811592341532\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3287037037037037,\n \"acc_stderr\": 0.032036140846700596,\n \"acc_norm\": 0.3287037037037037,\n \"acc_norm_stderr\": 0.032036140846700596\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6568627450980392,\n \"acc_stderr\": 0.033321399446680854,\n \"acc_norm\": 0.6568627450980392,\n \"acc_norm_stderr\": 0.033321399446680854\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03068582059661079,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03068582059661079\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5739910313901345,\n \"acc_stderr\": 0.03318833286217281,\n \"acc_norm\": 0.5739910313901345,\n \"acc_norm_stderr\": 0.03318833286217281\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.04348208051644858,\n \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.04348208051644858\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.043913262867240704,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.043913262867240704\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.04750077341199984,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.04750077341199984\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.558282208588957,\n \"acc_stderr\": 0.03901591825836184,\n \"acc_norm\": 0.558282208588957,\n \"acc_norm_stderr\": 0.03901591825836184\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6893203883495146,\n \"acc_stderr\": 0.04582124160161551,\n \"acc_norm\": 0.6893203883495146,\n \"acc_norm_stderr\": 0.04582124160161551\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7136752136752137,\n \"acc_stderr\": 0.02961432369045665,\n \"acc_norm\": 0.7136752136752137,\n \"acc_norm_stderr\": 0.02961432369045665\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n 
\"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6819923371647509,\n \"acc_stderr\": 0.01665348627561539,\n \"acc_norm\": 0.6819923371647509,\n \"acc_norm_stderr\": 0.01665348627561539\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5173410404624278,\n \"acc_stderr\": 0.02690290045866664,\n \"acc_norm\": 0.5173410404624278,\n \"acc_norm_stderr\": 0.02690290045866664\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2223463687150838,\n \"acc_stderr\": 0.013907189208156881,\n \"acc_norm\": 0.2223463687150838,\n \"acc_norm_stderr\": 0.013907189208156881\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5130718954248366,\n \"acc_stderr\": 0.028620130800700246,\n \"acc_norm\": 0.5130718954248366,\n \"acc_norm_stderr\": 0.028620130800700246\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5755627009646302,\n \"acc_stderr\": 0.028071928247946215,\n \"acc_norm\": 0.5755627009646302,\n \"acc_norm_stderr\": 0.028071928247946215\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5709876543209876,\n \"acc_stderr\": 0.027538925613470863,\n \"acc_norm\": 0.5709876543209876,\n \"acc_norm_stderr\": 0.027538925613470863\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.36524822695035464,\n \"acc_stderr\": 0.02872386385328128,\n \"acc_norm\": 0.36524822695035464,\n \"acc_norm_stderr\": 0.02872386385328128\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.34876140808344197,\n \"acc_stderr\": 0.01217203515712712,\n \"acc_norm\": 0.34876140808344197,\n \"acc_norm_stderr\": 0.01217203515712712\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.45588235294117646,\n \"acc_stderr\": 0.030254372573976684,\n \"acc_norm\": 0.45588235294117646,\n \"acc_norm_stderr\": 0.030254372573976684\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4934640522875817,\n \"acc_stderr\": 0.020226106567657807,\n \"acc_norm\": 0.4934640522875817,\n \"acc_norm_stderr\": 0.020226106567657807\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5363636363636364,\n \"acc_stderr\": 0.04776449162396197,\n \"acc_norm\": 0.5363636363636364,\n \"acc_norm_stderr\": 0.04776449162396197\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5224489795918368,\n \"acc_stderr\": 0.031976941187136725,\n \"acc_norm\": 0.5224489795918368,\n \"acc_norm_stderr\": 0.031976941187136725\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6467661691542289,\n \"acc_stderr\": 0.03379790611796777,\n \"acc_norm\": 0.6467661691542289,\n \"acc_norm_stderr\": 0.03379790611796777\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42771084337349397,\n \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 0.42771084337349397,\n \"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7192982456140351,\n \"acc_stderr\": 0.03446296217088427,\n \"acc_norm\": 0.7192982456140351,\n \"acc_norm_stderr\": 0.03446296217088427\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29008567931456547,\n \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.43685304669032105,\n \"mc2_stderr\": 0.015582536589566296\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7190213101815311,\n 
\"acc_stderr\": 0.012632541095875824\n },\n \"harness|drop|3\": {\n \"em\": 0.02160234899328859,\n \"em_stderr\": 0.0014888393578850528,\n \"f1\": 0.08137164429530211,\n \"f1_stderr\": 0.0020119444875776374\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07884761182714177,\n \"acc_stderr\": 0.007423390519873241\n }\n}\n```", "repo_url": "https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-200step-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|drop|3_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-39-46.756166.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-39-46.756166.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-39-46.756166.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-39-46.756166.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-39-46.756166.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["**/details_harness|winogrande|5_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T18-39-46.756166.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T18_39_46.756166", "path": ["results_2023-11-23T18-39-46.756166.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T18-39-46.756166.parquet"]}]}]}
2023-11-23T18:43:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-v2 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-v2 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T18:39:46.756166 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
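The loading snippet that "you can for instance do the following:" refers to was stripped from this processed copy of the card. A minimal sketch of what it would look like, reconstructed from the pattern used by the other evaluation-run cards in this dump; the details-repo name below follows the leaderboard's `details_<org>__<model>_public` naming convention and the `harness_winogrande_5` config name is taken from this record's metadata, so treat both as assumptions rather than verified identifiers:

```python
from datasets import load_dataset

# Reconstructed loading call; the repository name follows the
# open-llm-leaderboard naming pattern and is an assumption, not a
# verified identifier from this stripped card.
data = load_dataset(
    "open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-200step-v2_public",
    "harness_winogrande_5",
    split="train",
)
```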
[ "# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-v2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:39:46.756166(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-v2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:39:46.756166(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 31, 31, 180, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-200step-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-200step-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T18:39:46.756166(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
7cbdcf1aa256062e112d8e1cd2efc726010a900a
This dataset is roughly 27k erotica stories that I've fed through GPT-3.5-turbo-16k to obtain a summary, writing prompt, and tags as a response. I've filtered out all the refusals, and deleted a fair amount of "GPT-isms". I'd still like to go through this again to prune any remaining low-quality responses I've missed, but I think this is a good start. Most of the context size comes from the stories themselves, not the responses. Please consider supporting my Patreon (https://www.patreon.com/openerotica). I'm only asking for about tree fiddy and it all goes toward helping me create more models and datasets.
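A minimal sketch of how one might load and inspect this dataset with the Hugging Face `datasets` library; the card does not document the column schema or split names, so the code below prints them rather than assuming them:

```python
from datasets import load_dataset

# Load the dataset from the Hub; printing the DatasetDict shows the
# available splits and column names without guessing at the schema.
ds = load_dataset("openerotica/erotica-analysis")
print(ds)

# Assuming a "train" split exists (not confirmed by the card), peek at
# one record to see how the story, summary, prompt, and tags are stored.
example = ds["train"][0]
for key, value in example.items():
    print(f"{key}: {str(value)[:120]}")
```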
openerotica/erotica-analysis
[ "license:apache-2.0", "region:us" ]
2023-11-23T18:47:00+00:00
{"license": "apache-2.0"}
2023-12-26T23:00:59+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This dataset is roughly 27k erotica stories that I've fed through GPT-3.5-turbo-16k to obtain a summary, writing prompt, and tags as a response. I've filtered out all the refusals, and deleted a fair amount of "GPT-isms". I'd still like to go through this again to prune any remaining low-quality responses I've missed, but I think this is a good start. Most of the context size comes from the stories themselves, not the responses. Please consider supporting my Patreon (URL I'm only asking for about tree fiddy and it all goes toward helping me create more models and datasets.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
488a639f7fff846a70e5e5a71d8017016bd4371d
# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-100step-v2 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-100step-v2 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [Korabbit/Llama-2-7b-chat-hf-afr-100step-v2](https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-100step-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-100step-v2_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-23T18:49:40.471713](https://huggingface.co/datasets/open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-100step-v2_public/blob/main/results_2023-11-23T18-49-40.471713.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.4839603798631096, "acc_stderr": 0.034233481703847386, "acc_norm": 0.4904194469293332, "acc_norm_stderr": 0.0350370635088906, "mc1": 0.2974296205630355, "mc1_stderr": 0.016002651487361005, "mc2": 0.4518194385088943, "mc2_stderr": 0.01565368058265292, "em": 0.04016359060402685, "em_stderr": 0.002010733562468151, "f1": 0.10108536073825498, "f1_stderr": 0.0024087765856211545 }, "harness|arc:challenge|25": { "acc": 0.492320819112628, "acc_stderr": 0.01460966744089257, "acc_norm": 0.5264505119453925, "acc_norm_stderr": 0.014590931358120167 }, "harness|hellaswag|10": { "acc": 0.5955984863572994, "acc_stderr": 0.004897728370737241, "acc_norm": 0.782513443537144, "acc_norm_stderr": 0.0041169313831573495 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542129, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542129 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.42962962962962964, "acc_stderr": 0.04276349494376599, "acc_norm": 0.42962962962962964, "acc_norm_stderr": 0.04276349494376599 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.47368421052631576, "acc_stderr": 0.04063302731486671, "acc_norm": 0.47368421052631576, "acc_norm_stderr": 0.04063302731486671 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.53, "acc_stderr": 0.050161355804659205, "acc_norm": 0.53, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5471698113207547, "acc_stderr": 0.03063562795796182, "acc_norm": 0.5471698113207547, "acc_norm_stderr": 0.03063562795796182 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5208333333333334, "acc_stderr": 0.041775789507399935, "acc_norm": 0.5208333333333334, "acc_norm_stderr": 0.041775789507399935 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.28, "acc_stderr": 0.04512608598542127, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.4046242774566474, "acc_stderr": 0.03742461193887248, "acc_norm": 0.4046242774566474, "acc_norm_stderr": 0.03742461193887248 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.22549019607843138, "acc_stderr": 0.041583075330832865, "acc_norm": 0.22549019607843138, "acc_norm_stderr": 0.041583075330832865 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4085106382978723, "acc_stderr": 0.03213418026701576, "acc_norm": 0.4085106382978723, "acc_norm_stderr": 0.03213418026701576 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.37719298245614036, "acc_stderr": 0.045595221419582166, "acc_norm": 0.37719298245614036, "acc_norm_stderr": 0.045595221419582166 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.503448275862069, "acc_stderr": 0.041665675771015785, "acc_norm": 0.503448275862069, "acc_norm_stderr": 0.041665675771015785 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30158730158730157, "acc_stderr": 0.023636975996101806, "acc_norm": 0.30158730158730157, "acc_norm_stderr": 
0.023636975996101806 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.24603174603174602, "acc_stderr": 0.03852273364924314, "acc_norm": 0.24603174603174602, "acc_norm_stderr": 0.03852273364924314 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.36, "acc_stderr": 0.048241815132442176, "acc_norm": 0.36, "acc_norm_stderr": 0.048241815132442176 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5225806451612903, "acc_stderr": 0.02841498501970786, "acc_norm": 0.5225806451612903, "acc_norm_stderr": 0.02841498501970786 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.35960591133004927, "acc_stderr": 0.033764582465095665, "acc_norm": 0.35960591133004927, "acc_norm_stderr": 0.033764582465095665 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.5878787878787879, "acc_stderr": 0.03843566993588717, "acc_norm": 0.5878787878787879, "acc_norm_stderr": 0.03843566993588717 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6060606060606061, "acc_stderr": 0.034812853382329624, "acc_norm": 0.6060606060606061, "acc_norm_stderr": 0.034812853382329624 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7150259067357513, "acc_stderr": 0.032577140777096614, "acc_norm": 0.7150259067357513, "acc_norm_stderr": 0.032577140777096614 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.4256410256410256, "acc_stderr": 0.02506909438729654, "acc_norm": 0.4256410256410256, "acc_norm_stderr": 0.02506909438729654 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.26296296296296295, "acc_stderr": 0.02684205787383371, "acc_norm": 0.26296296296296295, "acc_norm_stderr": 0.02684205787383371 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.42016806722689076, "acc_stderr": 0.03206183783236152, "acc_norm": 0.42016806722689076, "acc_norm_stderr": 0.03206183783236152 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2847682119205298, "acc_stderr": 0.036848815213890225, "acc_norm": 0.2847682119205298, "acc_norm_stderr": 0.036848815213890225 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.6788990825688074, "acc_stderr": 0.02001814977273375, "acc_norm": 0.6788990825688074, "acc_norm_stderr": 0.02001814977273375 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.3333333333333333, "acc_stderr": 0.0321495214780275, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.0321495214780275 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.6666666666666666, "acc_stderr": 0.03308611113236434, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.03308611113236434 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.6751054852320675, "acc_stderr": 0.03048603938910529, "acc_norm": 0.6751054852320675, "acc_norm_stderr": 0.03048603938910529 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5695067264573991, "acc_stderr": 0.033231973029429394, "acc_norm": 0.5695067264573991, "acc_norm_stderr": 0.033231973029429394 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5725190839694656, "acc_stderr": 0.04338920305792401, "acc_norm": 0.5725190839694656, "acc_norm_stderr": 0.04338920305792401 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6363636363636364, "acc_stderr": 0.043913262867240704, "acc_norm": 0.6363636363636364, "acc_norm_stderr": 0.043913262867240704 }, 
"harness|hendrycksTest-jurisprudence|5": { "acc": 0.5925925925925926, "acc_stderr": 0.04750077341199984, "acc_norm": 0.5925925925925926, "acc_norm_stderr": 0.04750077341199984 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.558282208588957, "acc_stderr": 0.03901591825836184, "acc_norm": 0.558282208588957, "acc_norm_stderr": 0.03901591825836184 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3125, "acc_stderr": 0.043994650575715215, "acc_norm": 0.3125, "acc_norm_stderr": 0.043994650575715215 }, "harness|hendrycksTest-management|5": { "acc": 0.6699029126213593, "acc_stderr": 0.04656147110012351, "acc_norm": 0.6699029126213593, "acc_norm_stderr": 0.04656147110012351 }, "harness|hendrycksTest-marketing|5": { "acc": 0.717948717948718, "acc_stderr": 0.029480360549541194, "acc_norm": 0.717948717948718, "acc_norm_stderr": 0.029480360549541194 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.49, "acc_stderr": 0.05024183937956911, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.6781609195402298, "acc_stderr": 0.0167063814150579, "acc_norm": 0.6781609195402298, "acc_norm_stderr": 0.0167063814150579 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5144508670520231, "acc_stderr": 0.026907849856282542, "acc_norm": 0.5144508670520231, "acc_norm_stderr": 0.026907849856282542 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2212290502793296, "acc_stderr": 0.013882164598887275, "acc_norm": 0.2212290502793296, "acc_norm_stderr": 0.013882164598887275 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5163398692810458, "acc_stderr": 0.02861462475280544, "acc_norm": 0.5163398692810458, "acc_norm_stderr": 0.02861462475280544 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5755627009646302, "acc_stderr": 0.028071928247946215, "acc_norm": 0.5755627009646302, "acc_norm_stderr": 0.028071928247946215 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5679012345679012, "acc_stderr": 0.027563010971606676, "acc_norm": 0.5679012345679012, "acc_norm_stderr": 0.027563010971606676 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.3723404255319149, "acc_stderr": 0.028838921471251458, "acc_norm": 0.3723404255319149, "acc_norm_stderr": 0.028838921471251458 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.3494132985658409, "acc_stderr": 0.012177306252786686, "acc_norm": 0.3494132985658409, "acc_norm_stderr": 0.012177306252786686 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.45588235294117646, "acc_stderr": 0.03025437257397668, "acc_norm": 0.45588235294117646, "acc_norm_stderr": 0.03025437257397668 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.4852941176470588, "acc_stderr": 0.020219083895133924, "acc_norm": 0.4852941176470588, "acc_norm_stderr": 0.020219083895133924 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.5363636363636364, "acc_stderr": 0.04776449162396197, "acc_norm": 0.5363636363636364, "acc_norm_stderr": 0.04776449162396197 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5265306122448979, "acc_stderr": 0.03196412734523272, "acc_norm": 0.5265306122448979, "acc_norm_stderr": 0.03196412734523272 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6467661691542289, "acc_stderr": 0.03379790611796777, "acc_norm": 0.6467661691542289, "acc_norm_stderr": 0.03379790611796777 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.72, "acc_stderr": 0.045126085985421276, "acc_norm": 0.72, "acc_norm_stderr": 0.045126085985421276 }, 
"harness|hendrycksTest-virology|5": { "acc": 0.43373493975903615, "acc_stderr": 0.03858158940685517, "acc_norm": 0.43373493975903615, "acc_norm_stderr": 0.03858158940685517 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7192982456140351, "acc_stderr": 0.03446296217088427, "acc_norm": 0.7192982456140351, "acc_norm_stderr": 0.03446296217088427 }, "harness|truthfulqa:mc|0": { "mc1": 0.2974296205630355, "mc1_stderr": 0.016002651487361005, "mc2": 0.4518194385088943, "mc2_stderr": 0.01565368058265292 }, "harness|winogrande|5": { "acc": 0.7229676400947119, "acc_stderr": 0.012577891015342412 }, "harness|drop|3": { "em": 0.04016359060402685, "em_stderr": 0.002010733562468151, "f1": 0.10108536073825498, "f1_stderr": 0.0024087765856211545 }, "harness|gsm8k|5": { "acc": 0.08491281273692192, "acc_stderr": 0.007678212824450797 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-100step-v2
[ "region:us" ]
2023-11-23T18:52:43+00:00
{"pretty_name": "Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-100step-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [Korabbit/Llama-2-7b-chat-hf-afr-100step-v2](https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-100step-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-100step-v2_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T18:49:40.471713](https://huggingface.co/datasets/open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-100step-v2_public/blob/main/results_2023-11-23T18-49-40.471713.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4839603798631096,\n \"acc_stderr\": 0.034233481703847386,\n \"acc_norm\": 0.4904194469293332,\n \"acc_norm_stderr\": 0.0350370635088906,\n \"mc1\": 0.2974296205630355,\n \"mc1_stderr\": 0.016002651487361005,\n \"mc2\": 0.4518194385088943,\n \"mc2_stderr\": 0.01565368058265292,\n \"em\": 0.04016359060402685,\n \"em_stderr\": 0.002010733562468151,\n \"f1\": 0.10108536073825498,\n \"f1_stderr\": 0.0024087765856211545\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.492320819112628,\n \"acc_stderr\": 0.01460966744089257,\n \"acc_norm\": 0.5264505119453925,\n \"acc_norm_stderr\": 0.014590931358120167\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5955984863572994,\n \"acc_stderr\": 0.004897728370737241,\n \"acc_norm\": 0.782513443537144,\n \"acc_norm_stderr\": 0.0041169313831573495\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.42962962962962964,\n \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.42962962962962964,\n \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.04063302731486671,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.04063302731486671\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5471698113207547,\n \"acc_stderr\": 0.03063562795796182,\n \"acc_norm\": 0.5471698113207547,\n \"acc_norm_stderr\": 0.03063562795796182\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5208333333333334,\n \"acc_stderr\": 
0.041775789507399935,\n \"acc_norm\": 0.5208333333333334,\n \"acc_norm_stderr\": 0.041775789507399935\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4046242774566474,\n \"acc_stderr\": 0.03742461193887248,\n \"acc_norm\": 0.4046242774566474,\n \"acc_norm_stderr\": 0.03742461193887248\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.041583075330832865,\n \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.041583075330832865\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4085106382978723,\n \"acc_stderr\": 0.03213418026701576,\n \"acc_norm\": 0.4085106382978723,\n \"acc_norm_stderr\": 0.03213418026701576\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.37719298245614036,\n \"acc_stderr\": 0.045595221419582166,\n \"acc_norm\": 0.37719298245614036,\n \"acc_norm_stderr\": 0.045595221419582166\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.041665675771015785,\n \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.041665675771015785\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30158730158730157,\n \"acc_stderr\": 0.023636975996101806,\n \"acc_norm\": 0.30158730158730157,\n \"acc_norm_stderr\": 0.023636975996101806\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.03852273364924314,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.03852273364924314\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5225806451612903,\n \"acc_stderr\": 0.02841498501970786,\n \"acc_norm\": 0.5225806451612903,\n \"acc_norm_stderr\": 0.02841498501970786\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.35960591133004927,\n \"acc_stderr\": 0.033764582465095665,\n \"acc_norm\": 0.35960591133004927,\n \"acc_norm_stderr\": 0.033764582465095665\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.5878787878787879,\n \"acc_stderr\": 0.03843566993588717,\n \"acc_norm\": 0.5878787878787879,\n \"acc_norm_stderr\": 0.03843566993588717\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6060606060606061,\n \"acc_stderr\": 0.034812853382329624,\n \"acc_norm\": 0.6060606060606061,\n \"acc_norm_stderr\": 0.034812853382329624\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n 
\"acc\": 0.7150259067357513,\n \"acc_stderr\": 0.032577140777096614,\n \"acc_norm\": 0.7150259067357513,\n \"acc_norm_stderr\": 0.032577140777096614\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4256410256410256,\n \"acc_stderr\": 0.02506909438729654,\n \"acc_norm\": 0.4256410256410256,\n \"acc_norm_stderr\": 0.02506909438729654\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.42016806722689076,\n \"acc_stderr\": 0.03206183783236152,\n \"acc_norm\": 0.42016806722689076,\n \"acc_norm_stderr\": 0.03206183783236152\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2847682119205298,\n \"acc_stderr\": 0.036848815213890225,\n \"acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.036848815213890225\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6788990825688074,\n \"acc_stderr\": 0.02001814977273375,\n \"acc_norm\": 0.6788990825688074,\n \"acc_norm_stderr\": 0.02001814977273375\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.0321495214780275,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.0321495214780275\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03308611113236434,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03308611113236434\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6751054852320675,\n \"acc_stderr\": 0.03048603938910529,\n \"acc_norm\": 0.6751054852320675,\n \"acc_norm_stderr\": 0.03048603938910529\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5695067264573991,\n \"acc_stderr\": 0.033231973029429394,\n \"acc_norm\": 0.5695067264573991,\n \"acc_norm_stderr\": 0.033231973029429394\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5725190839694656,\n \"acc_stderr\": 0.04338920305792401,\n \"acc_norm\": 0.5725190839694656,\n \"acc_norm_stderr\": 0.04338920305792401\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.043913262867240704,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.043913262867240704\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.04750077341199984,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.04750077341199984\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.558282208588957,\n \"acc_stderr\": 0.03901591825836184,\n \"acc_norm\": 0.558282208588957,\n \"acc_norm_stderr\": 0.03901591825836184\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6699029126213593,\n \"acc_stderr\": 0.04656147110012351,\n \"acc_norm\": 0.6699029126213593,\n \"acc_norm_stderr\": 0.04656147110012351\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.717948717948718,\n \"acc_stderr\": 0.029480360549541194,\n \"acc_norm\": 0.717948717948718,\n \"acc_norm_stderr\": 0.029480360549541194\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n 
\"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6781609195402298,\n \"acc_stderr\": 0.0167063814150579,\n \"acc_norm\": 0.6781609195402298,\n \"acc_norm_stderr\": 0.0167063814150579\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5144508670520231,\n \"acc_stderr\": 0.026907849856282542,\n \"acc_norm\": 0.5144508670520231,\n \"acc_norm_stderr\": 0.026907849856282542\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2212290502793296,\n \"acc_stderr\": 0.013882164598887275,\n \"acc_norm\": 0.2212290502793296,\n \"acc_norm_stderr\": 0.013882164598887275\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5163398692810458,\n \"acc_stderr\": 0.02861462475280544,\n \"acc_norm\": 0.5163398692810458,\n \"acc_norm_stderr\": 0.02861462475280544\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5755627009646302,\n \"acc_stderr\": 0.028071928247946215,\n \"acc_norm\": 0.5755627009646302,\n \"acc_norm_stderr\": 0.028071928247946215\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5679012345679012,\n \"acc_stderr\": 0.027563010971606676,\n \"acc_norm\": 0.5679012345679012,\n \"acc_norm_stderr\": 0.027563010971606676\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3723404255319149,\n \"acc_stderr\": 0.028838921471251458,\n \"acc_norm\": 0.3723404255319149,\n \"acc_norm_stderr\": 0.028838921471251458\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3494132985658409,\n \"acc_stderr\": 0.012177306252786686,\n \"acc_norm\": 0.3494132985658409,\n \"acc_norm_stderr\": 0.012177306252786686\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.45588235294117646,\n \"acc_stderr\": 0.03025437257397668,\n \"acc_norm\": 0.45588235294117646,\n \"acc_norm_stderr\": 0.03025437257397668\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4852941176470588,\n \"acc_stderr\": 0.020219083895133924,\n \"acc_norm\": 0.4852941176470588,\n \"acc_norm_stderr\": 0.020219083895133924\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5363636363636364,\n \"acc_stderr\": 0.04776449162396197,\n \"acc_norm\": 0.5363636363636364,\n \"acc_norm_stderr\": 0.04776449162396197\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5265306122448979,\n \"acc_stderr\": 0.03196412734523272,\n \"acc_norm\": 0.5265306122448979,\n \"acc_norm_stderr\": 0.03196412734523272\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6467661691542289,\n \"acc_stderr\": 0.03379790611796777,\n \"acc_norm\": 0.6467661691542289,\n \"acc_norm_stderr\": 0.03379790611796777\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.43373493975903615,\n \"acc_stderr\": 0.03858158940685517,\n \"acc_norm\": 0.43373493975903615,\n \"acc_norm_stderr\": 0.03858158940685517\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7192982456140351,\n \"acc_stderr\": 0.03446296217088427,\n \"acc_norm\": 0.7192982456140351,\n \"acc_norm_stderr\": 0.03446296217088427\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2974296205630355,\n \"mc1_stderr\": 0.016002651487361005,\n \"mc2\": 0.4518194385088943,\n \"mc2_stderr\": 0.01565368058265292\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7229676400947119,\n 
\"acc_stderr\": 0.012577891015342412\n },\n \"harness|drop|3\": {\n \"em\": 0.04016359060402685,\n \"em_stderr\": 0.002010733562468151,\n \"f1\": 0.10108536073825498,\n \"f1_stderr\": 0.0024087765856211545\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08491281273692192,\n \"acc_stderr\": 0.007678212824450797\n }\n}\n```", "repo_url": "https://huggingface.co/Korabbit/Llama-2-7b-chat-hf-afr-100step-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|drop|3_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-49-40.471713.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-49-40.471713.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T18-49-40.471713.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T18-49-40.471713.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T18-49-40.471713.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["**/details_harness|winogrande|5_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T18-49-40.471713.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T18_49_40.471713", "path": ["results_2023-11-23T18-49-40.471713.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T18-49-40.471713.parquet"]}]}]}
2023-11-23T18:53:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-100step-v2 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-100step-v2 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T18:49:40.471713 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
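The code block that followed "To load the details from a run, you can for instance do the following:" was dropped when the card text above was flattened; restored from the dataset_summary in the metadata above, it is:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Korabbit__Llama-2-7b-chat-hf-afr-100step-v2_public",
	"harness_winogrande_5",
	split="train")
```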
[ "# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-100step-v2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-100step-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:49:40.471713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-100step-v2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-100step-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T18:49:40.471713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 31, 31, 180, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Korabbit/Llama-2-7b-chat-hf-afr-100step-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Korabbit/Llama-2-7b-chat-hf-afr-100step-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T18:49:40.471713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
08db5560f8401620f0cc2150558ec33137af9e54
# Dataset Card for Evaluation run of speechlessai/speechless-mistral-7b-dare-0.85

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/speechlessai/speechless-mistral-7b-dare-0.85
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [speechlessai/speechless-mistral-7b-dare-0.85](https://huggingface.co/speechlessai/speechless-mistral-7b-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_speechlessai__speechless-mistral-7b-dare-0.85_public",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-11-23T19:00:24.923358](https://huggingface.co/datasets/open-llm-leaderboard/details_speechlessai__speechless-mistral-7b-dare-0.85_public/blob/main/results_2023-11-23T19-00-24.923358.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6369591583509581, "acc_stderr": 0.03209160104558823, "acc_norm": 0.6455116611491236, "acc_norm_stderr": 0.0327770298036848, "mc1": 0.35128518971848227, "mc1_stderr": 0.016711358163544403, "mc2": 0.5067853019722414, "mc2_stderr": 0.015079174812087311, "em": 0.034395973154362415, "em_stderr": 0.0018663495487686948, "f1": 0.10012898489932888, "f1_stderr": 0.002256552299533148 }, "harness|arc:challenge|25": { "acc": 0.606655290102389, "acc_stderr": 0.014275101465693026, "acc_norm": 0.6331058020477816, "acc_norm_stderr": 0.014084133118104298 }, "harness|hellaswag|10": { "acc": 0.6532563234415455, "acc_stderr": 0.004749606196363343, "acc_norm": 0.8493328022306313, "acc_norm_stderr": 0.003569930987961452 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6776315789473685, "acc_stderr": 0.03803510248351585, "acc_norm": 0.6776315789473685, "acc_norm_stderr": 0.03803510248351585 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118637, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118637 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.75, "acc_stderr": 0.03621034121889507, "acc_norm": 0.75, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.05016135580465919, "acc_norm": 0.47, "acc_norm_stderr": 0.05016135580465919 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6589595375722543, "acc_stderr": 0.03614665424180826, "acc_norm": 0.6589595375722543, "acc_norm_stderr": 0.03614665424180826 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5574468085106383, "acc_stderr": 0.03246956919789958, "acc_norm": 0.5574468085106383, "acc_norm_stderr": 0.03246956919789958 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4473684210526316, "acc_stderr": 0.04677473004491199, "acc_norm": 0.4473684210526316, "acc_norm_stderr": 0.04677473004491199 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5517241379310345, "acc_stderr": 0.04144311810878151, "acc_norm": 0.5517241379310345, "acc_norm_stderr": 0.04144311810878151 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41005291005291006, "acc_stderr": 0.02533120243894444, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.02533120243894444 }, "harness|hendrycksTest-formal_logic|5": { "acc": 
0.4444444444444444, "acc_stderr": 0.04444444444444449, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04444444444444449 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.41, "acc_stderr": 0.04943110704237102, "acc_norm": 0.41, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7774193548387097, "acc_stderr": 0.023664216671642525, "acc_norm": 0.7774193548387097, "acc_norm_stderr": 0.023664216671642525 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.032876667586034906, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.032876667586034906 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.797979797979798, "acc_stderr": 0.028606204289229876, "acc_norm": 0.797979797979798, "acc_norm_stderr": 0.028606204289229876 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.021500249576033463, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.021500249576033463 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.658974358974359, "acc_stderr": 0.02403548967633507, "acc_norm": 0.658974358974359, "acc_norm_stderr": 0.02403548967633507 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3592592592592593, "acc_stderr": 0.029252905927251976, "acc_norm": 0.3592592592592593, "acc_norm_stderr": 0.029252905927251976 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6554621848739496, "acc_stderr": 0.030868682604121622, "acc_norm": 0.6554621848739496, "acc_norm_stderr": 0.030868682604121622 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3443708609271523, "acc_stderr": 0.038796870240733264, "acc_norm": 0.3443708609271523, "acc_norm_stderr": 0.038796870240733264 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.818348623853211, "acc_stderr": 0.016530617409266878, "acc_norm": 0.818348623853211, "acc_norm_stderr": 0.016530617409266878 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5324074074074074, "acc_stderr": 0.03402801581358966, "acc_norm": 0.5324074074074074, "acc_norm_stderr": 0.03402801581358966 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.028626547912437406, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.028626547912437406 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7932489451476793, "acc_stderr": 0.02636165166838909, "acc_norm": 0.7932489451476793, "acc_norm_stderr": 0.02636165166838909 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.695067264573991, "acc_stderr": 0.030898610882477515, "acc_norm": 0.695067264573991, "acc_norm_stderr": 0.030898610882477515 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7786259541984732, "acc_stderr": 0.03641297081313729, "acc_norm": 0.7786259541984732, "acc_norm_stderr": 0.03641297081313729 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8181818181818182, "acc_stderr": 0.03520893951097653, "acc_norm": 0.8181818181818182, "acc_norm_stderr": 0.03520893951097653 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 
0.0401910747255735, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.0401910747255735 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7914110429447853, "acc_stderr": 0.031921934489347235, "acc_norm": 0.7914110429447853, "acc_norm_stderr": 0.031921934489347235 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.04726835553719099, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.04726835553719099 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.04058042015646034, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.04058042015646034 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8888888888888888, "acc_stderr": 0.020588491316092375, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.020588491316092375 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8173690932311622, "acc_stderr": 0.013816335389973136, "acc_norm": 0.8173690932311622, "acc_norm_stderr": 0.013816335389973136 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7341040462427746, "acc_stderr": 0.02378620325550829, "acc_norm": 0.7341040462427746, "acc_norm_stderr": 0.02378620325550829 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3564245810055866, "acc_stderr": 0.016018239710513395, "acc_norm": 0.3564245810055866, "acc_norm_stderr": 0.016018239710513395 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.761437908496732, "acc_stderr": 0.02440439492808787, "acc_norm": 0.761437908496732, "acc_norm_stderr": 0.02440439492808787 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6977491961414791, "acc_stderr": 0.02608270069539966, "acc_norm": 0.6977491961414791, "acc_norm_stderr": 0.02608270069539966 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7345679012345679, "acc_stderr": 0.024569223600460845, "acc_norm": 0.7345679012345679, "acc_norm_stderr": 0.024569223600460845 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48936170212765956, "acc_stderr": 0.029820747191422473, "acc_norm": 0.48936170212765956, "acc_norm_stderr": 0.029820747191422473 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.455019556714472, "acc_stderr": 0.012718456618701766, "acc_norm": 0.455019556714472, "acc_norm_stderr": 0.012718456618701766 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6727941176470589, "acc_stderr": 0.02850145286039655, "acc_norm": 0.6727941176470589, "acc_norm_stderr": 0.02850145286039655 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6601307189542484, "acc_stderr": 0.01916241858862356, "acc_norm": 0.6601307189542484, "acc_norm_stderr": 0.01916241858862356 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.04494290866252089, "acc_norm": 0.6727272727272727, "acc_norm_stderr": 0.04494290866252089 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.028123429335142783, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.028123429335142783 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8507462686567164, "acc_stderr": 0.02519692987482706, "acc_norm": 0.8507462686567164, "acc_norm_stderr": 0.02519692987482706 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.84, "acc_stderr": 0.03684529491774709, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774709 }, "harness|hendrycksTest-virology|5": { "acc": 0.4879518072289157, 
"acc_stderr": 0.03891364495835821, "acc_norm": 0.4879518072289157, "acc_norm_stderr": 0.03891364495835821 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.35128518971848227, "mc1_stderr": 0.016711358163544403, "mc2": 0.5067853019722414, "mc2_stderr": 0.015079174812087311 }, "harness|winogrande|5": { "acc": 0.7932123125493291, "acc_stderr": 0.011382566829235803 }, "harness|drop|3": { "em": 0.034395973154362415, "em_stderr": 0.0018663495487686948, "f1": 0.10012898489932888, "f1_stderr": 0.002256552299533148 }, "harness|gsm8k|5": { "acc": 0.19863532979529946, "acc_stderr": 0.010989694978252754 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_speechlessai__speechless-mistral-7b-dare-0.85
[ "region:us" ]
2023-11-23T19:03:28+00:00
{"pretty_name": "Evaluation run of speechlessai/speechless-mistral-7b-dare-0.85", "dataset_summary": "Dataset automatically created during the evaluation run of model [speechlessai/speechless-mistral-7b-dare-0.85](https://huggingface.co/speechlessai/speechless-mistral-7b-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_speechlessai__speechless-mistral-7b-dare-0.85_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T19:00:24.923358](https://huggingface.co/datasets/open-llm-leaderboard/details_speechlessai__speechless-mistral-7b-dare-0.85_public/blob/main/results_2023-11-23T19-00-24.923358.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6369591583509581,\n \"acc_stderr\": 0.03209160104558823,\n \"acc_norm\": 0.6455116611491236,\n \"acc_norm_stderr\": 0.0327770298036848,\n \"mc1\": 0.35128518971848227,\n \"mc1_stderr\": 0.016711358163544403,\n \"mc2\": 0.5067853019722414,\n \"mc2_stderr\": 0.015079174812087311,\n \"em\": 0.034395973154362415,\n \"em_stderr\": 0.0018663495487686948,\n \"f1\": 0.10012898489932888,\n \"f1_stderr\": 0.002256552299533148\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.606655290102389,\n \"acc_stderr\": 0.014275101465693026,\n \"acc_norm\": 0.6331058020477816,\n \"acc_norm_stderr\": 0.014084133118104298\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6532563234415455,\n \"acc_stderr\": 0.004749606196363343,\n \"acc_norm\": 0.8493328022306313,\n \"acc_norm_stderr\": 0.003569930987961452\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 
0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878151,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878151\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894444,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894444\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.023664216671642525,\n \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.023664216671642525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.032876667586034906,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.032876667586034906\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.028606204289229876,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229876\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 
0.021500249576033463,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033463\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.658974358974359,\n \"acc_stderr\": 0.02403548967633507,\n \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.02403548967633507\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.030868682604121622,\n \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.030868682604121622\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.818348623853211,\n \"acc_stderr\": 0.016530617409266878,\n \"acc_norm\": 0.818348623853211,\n \"acc_norm_stderr\": 0.016530617409266878\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5324074074074074,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.5324074074074074,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.028626547912437406,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.028626547912437406\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7932489451476793,\n \"acc_stderr\": 0.02636165166838909,\n \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.02636165166838909\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097653,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097653\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.04058042015646034,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.04058042015646034\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n 
\"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n \"acc_stderr\": 0.013816335389973136,\n \"acc_norm\": 0.8173690932311622,\n \"acc_norm_stderr\": 0.013816335389973136\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7341040462427746,\n \"acc_stderr\": 0.02378620325550829,\n \"acc_norm\": 0.7341040462427746,\n \"acc_norm_stderr\": 0.02378620325550829\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3564245810055866,\n \"acc_stderr\": 0.016018239710513395,\n \"acc_norm\": 0.3564245810055866,\n \"acc_norm_stderr\": 0.016018239710513395\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.02440439492808787,\n \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.02440439492808787\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 0.6977491961414791,\n \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.455019556714472,\n \"acc_stderr\": 0.012718456618701766,\n \"acc_norm\": 0.455019556714472,\n \"acc_norm_stderr\": 0.012718456618701766\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.02850145286039655,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.02850145286039655\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.01916241858862356,\n \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.01916241858862356\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.04494290866252089,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.04494290866252089\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.02519692987482706,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.02519692987482706\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n \"acc_stderr\": 0.03891364495835821,\n \"acc_norm\": 0.4879518072289157,\n \"acc_norm_stderr\": 0.03891364495835821\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35128518971848227,\n \"mc1_stderr\": 0.016711358163544403,\n \"mc2\": 0.5067853019722414,\n \"mc2_stderr\": 0.015079174812087311\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7932123125493291,\n \"acc_stderr\": 
0.011382566829235803\n },\n \"harness|drop|3\": {\n \"em\": 0.034395973154362415,\n \"em_stderr\": 0.0018663495487686948,\n \"f1\": 0.10012898489932888,\n \"f1_stderr\": 0.002256552299533148\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.19863532979529946,\n \"acc_stderr\": 0.010989694978252754\n }\n}\n```", "repo_url": "https://huggingface.co/speechlessai/speechless-mistral-7b-dare-0.85", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|drop|3_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-00-24.923358.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-00-24.923358.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-00-24.923358.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-00-24.923358.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-00-24.923358.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["**/details_harness|winogrande|5_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T19-00-24.923358.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T19_00_24.923358", "path": ["results_2023-11-23T19-00-24.923358.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T19-00-24.923358.parquet"]}]}]}
2023-11-23T19:04:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of speechlessai/speechless-mistral-7b-dare-0.85 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model speechlessai/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following (see the snippet below): ## Latest results These are the latest results from run 2023-11-23T19:00:24.923358 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
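The loading snippet referenced above, as given in this repository's metadata (`harness_winogrande_5` is one of the 64 available configs):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_speechlessai__speechless-mistral-7b-dare-0.85_public",
	"harness_winogrande_5",
	split="train")
```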
[ "# Dataset Card for Evaluation run of speechlessai/speechless-mistral-7b-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:00:24.923358(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of speechlessai/speechless-mistral-7b-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:00:24.923358(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 26, 31, 175, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of speechlessai/speechless-mistral-7b-dare-0.85## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T19:00:24.923358(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
0e2f0fda3029240aa1a50e3248bfc860267bc1fd
### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` @inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 } ```
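### Usage

For completeness, a minimal sketch of loading this dataset with the Hugging Face `datasets` library. The split name is an assumption, since the card does not document the repository layout; inspect the repo for the actual splits.

```python
from datasets import load_dataset

# Minimal sketch: load the custom Common Voice (Hindi) dataset.
# The "train" split name is an assumption; check the repository for actual splits.
ds = load_dataset("anand-kamble/Custom_common_voice_dataset_using_RVC", split="train")
print(ds)
```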
anand-kamble/Custom_common_voice_dataset_using_RVC
[ "size_categories:10K<n<100K", "source_datasets:common_voice_v13", "language:hi", "license:cc0-1.0", "region:us" ]
2023-11-23T19:05:25+00:00
{"language": ["hi"], "license": "cc0-1.0", "size_categories": ["10K<n<100K"], "source_datasets": ["common_voice_v13"], "pretty_name": "Custom Common Voice", "viewer": true}
2023-11-23T20:05:47+00:00
[]
[ "hi" ]
TAGS #size_categories-10K<n<100K #source_datasets-common_voice_v13 #language-Hindi #license-cc0-1.0 #region-us
### Licensing Information Public Domain, CC-0
[ "### Licensing Information\n\nPublic Domain, CC-0" ]
[ "TAGS\n#size_categories-10K<n<100K #source_datasets-common_voice_v13 #language-Hindi #license-cc0-1.0 #region-us \n", "### Licensing Information\n\nPublic Domain, CC-0" ]
[ 45, 11 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #source_datasets-common_voice_v13 #language-Hindi #license-cc0-1.0 #region-us \n### Licensing Information\n\nPublic Domain, CC-0" ]
3f927a98faed708c2ce5cd9e151b15da599981e1
# Dataset Card for Evaluation run of souvik0306/mistral_7b_2epoch_norobots

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/souvik0306/mistral_7b_2epoch_norobots
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [souvik0306/mistral_7b_2epoch_norobots](https://huggingface.co/souvik0306/mistral_7b_2epoch_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_souvik0306__mistral_7b_2epoch_norobots_public",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-11-23T19:18:06.825101](https://huggingface.co/datasets/open-llm-leaderboard/details_souvik0306__mistral_7b_2epoch_norobots_public/blob/main/results_2023-11-23T19-18-06.825101.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6330598338387582, "acc_stderr": 0.03226570631734972, "acc_norm": 0.6423858579070316, "acc_norm_stderr": 0.0329680806753492, "mc1": 0.27906976744186046, "mc1_stderr": 0.015702107090627897, "mc2": 0.4261552372929774, "mc2_stderr": 0.014190532295151336, "em": 0.0016778523489932886, "em_stderr": 0.00041913301788268467, "f1": 0.062363674496644275, "f1_stderr": 0.0013875357781658866 }, "harness|arc:challenge|25": { "acc": 0.5708191126279863, "acc_stderr": 0.014464085894870653, "acc_norm": 0.6100682593856656, "acc_norm_stderr": 0.014252959848892894 }, "harness|hellaswag|10": { "acc": 0.6281617207727545, "acc_stderr": 0.004823078145064964, "acc_norm": 0.833698466440948, "acc_norm_stderr": 0.0037159010850549875 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6370370370370371, "acc_stderr": 0.04153948404742398, "acc_norm": 0.6370370370370371, "acc_norm_stderr": 0.04153948404742398 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6644736842105263, "acc_stderr": 0.03842498559395268, "acc_norm": 0.6644736842105263, "acc_norm_stderr": 0.03842498559395268 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.59, "acc_stderr": 0.04943110704237102, "acc_norm": 0.59, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7018867924528301, "acc_stderr": 0.02815283794249387, "acc_norm": 0.7018867924528301, "acc_norm_stderr": 0.02815283794249387 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7083333333333334, "acc_stderr": 0.038009680605548594, "acc_norm": 0.7083333333333334, "acc_norm_stderr": 0.038009680605548594 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.51, "acc_stderr": 0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.04292346959909283, "acc_norm": 0.76, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5914893617021276, "acc_stderr": 0.032134180267015755, "acc_norm": 0.5914893617021276, "acc_norm_stderr": 0.032134180267015755 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.0470070803355104, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.0470070803355104 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.041227371113703316, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.041227371113703316 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3862433862433862, "acc_stderr": 0.025075981767601688, "acc_norm": 0.3862433862433862, "acc_norm_stderr": 
0.025075981767601688 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.373015873015873, "acc_stderr": 0.04325506042017086, "acc_norm": 0.373015873015873, "acc_norm_stderr": 0.04325506042017086 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7645161290322581, "acc_stderr": 0.02413763242933771, "acc_norm": 0.7645161290322581, "acc_norm_stderr": 0.02413763242933771 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5320197044334976, "acc_stderr": 0.035107665979592154, "acc_norm": 0.5320197044334976, "acc_norm_stderr": 0.035107665979592154 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621505, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.03287666758603491, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.03287666758603491 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7727272727272727, "acc_stderr": 0.029857515673386414, "acc_norm": 0.7727272727272727, "acc_norm_stderr": 0.029857515673386414 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8860103626943006, "acc_stderr": 0.022935144053919443, "acc_norm": 0.8860103626943006, "acc_norm_stderr": 0.022935144053919443 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6564102564102564, "acc_stderr": 0.024078696580635474, "acc_norm": 0.6564102564102564, "acc_norm_stderr": 0.024078696580635474 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35555555555555557, "acc_stderr": 0.02918571494985741, "acc_norm": 0.35555555555555557, "acc_norm_stderr": 0.02918571494985741 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6596638655462185, "acc_stderr": 0.030778057422931673, "acc_norm": 0.6596638655462185, "acc_norm_stderr": 0.030778057422931673 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3443708609271523, "acc_stderr": 0.03879687024073327, "acc_norm": 0.3443708609271523, "acc_norm_stderr": 0.03879687024073327 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8256880733944955, "acc_stderr": 0.016265675632010354, "acc_norm": 0.8256880733944955, "acc_norm_stderr": 0.016265675632010354 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5277777777777778, "acc_stderr": 0.0340470532865388, "acc_norm": 0.5277777777777778, "acc_norm_stderr": 0.0340470532865388 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.028626547912437406, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.028626547912437406 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7721518987341772, "acc_stderr": 0.02730348459906943, "acc_norm": 0.7721518987341772, "acc_norm_stderr": 0.02730348459906943 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6995515695067265, "acc_stderr": 0.03076935200822915, "acc_norm": 0.6995515695067265, "acc_norm_stderr": 0.03076935200822915 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8015267175572519, "acc_stderr": 0.034981493854624714, "acc_norm": 0.8015267175572519, "acc_norm_stderr": 0.034981493854624714 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7851239669421488, "acc_stderr": 0.03749492448709698, "acc_norm": 0.7851239669421488, "acc_norm_stderr": 0.03749492448709698 }, 
"harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 0.040191074725573483, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.040191074725573483 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7668711656441718, "acc_stderr": 0.0332201579577674, "acc_norm": 0.7668711656441718, "acc_norm_stderr": 0.0332201579577674 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.49107142857142855, "acc_stderr": 0.04745033255489123, "acc_norm": 0.49107142857142855, "acc_norm_stderr": 0.04745033255489123 }, "harness|hendrycksTest-management|5": { "acc": 0.8058252427184466, "acc_stderr": 0.03916667762822585, "acc_norm": 0.8058252427184466, "acc_norm_stderr": 0.03916667762822585 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8760683760683761, "acc_stderr": 0.021586494001281386, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.021586494001281386 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.80970625798212, "acc_stderr": 0.014036945850381387, "acc_norm": 0.80970625798212, "acc_norm_stderr": 0.014036945850381387 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7196531791907514, "acc_stderr": 0.02418242749657761, "acc_norm": 0.7196531791907514, "acc_norm_stderr": 0.02418242749657761 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3039106145251397, "acc_stderr": 0.01538284558758452, "acc_norm": 0.3039106145251397, "acc_norm_stderr": 0.01538284558758452 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7516339869281046, "acc_stderr": 0.02473998135511359, "acc_norm": 0.7516339869281046, "acc_norm_stderr": 0.02473998135511359 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6977491961414791, "acc_stderr": 0.026082700695399662, "acc_norm": 0.6977491961414791, "acc_norm_stderr": 0.026082700695399662 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7345679012345679, "acc_stderr": 0.02456922360046085, "acc_norm": 0.7345679012345679, "acc_norm_stderr": 0.02456922360046085 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.46808510638297873, "acc_stderr": 0.029766675075873866, "acc_norm": 0.46808510638297873, "acc_norm_stderr": 0.029766675075873866 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4426336375488918, "acc_stderr": 0.012685906538206247, "acc_norm": 0.4426336375488918, "acc_norm_stderr": 0.012685906538206247 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6727941176470589, "acc_stderr": 0.028501452860396553, "acc_norm": 0.6727941176470589, "acc_norm_stderr": 0.028501452860396553 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6650326797385621, "acc_stderr": 0.019094228167000318, "acc_norm": 0.6650326797385621, "acc_norm_stderr": 0.019094228167000318 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.028123429335142777, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.028123429335142777 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8258706467661692, "acc_stderr": 0.026814951200421603, "acc_norm": 0.8258706467661692, "acc_norm_stderr": 0.026814951200421603 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.033799766898963086, "acc_norm": 0.87, "acc_norm_stderr": 
0.033799766898963086 }, "harness|hendrycksTest-virology|5": { "acc": 0.5602409638554217, "acc_stderr": 0.03864139923699122, "acc_norm": 0.5602409638554217, "acc_norm_stderr": 0.03864139923699122 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8128654970760234, "acc_stderr": 0.029913127232368036, "acc_norm": 0.8128654970760234, "acc_norm_stderr": 0.029913127232368036 }, "harness|truthfulqa:mc|0": { "mc1": 0.27906976744186046, "mc1_stderr": 0.015702107090627897, "mc2": 0.4261552372929774, "mc2_stderr": 0.014190532295151336 }, "harness|winogrande|5": { "acc": 0.7908445146014207, "acc_stderr": 0.01143045004588158 }, "harness|drop|3": { "em": 0.0016778523489932886, "em_stderr": 0.00041913301788268467, "f1": 0.062363674496644275, "f1_stderr": 0.0013875357781658866 }, "harness|gsm8k|5": { "acc": 0.16982562547384383, "acc_stderr": 0.010342572360861205 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_souvik0306__mistral_7b_2epoch_norobots
[ "region:us" ]
2023-11-23T19:21:07+00:00
{"pretty_name": "Evaluation run of souvik0306/mistral_7b_2epoch_norobots", "dataset_summary": "Dataset automatically created during the evaluation run of model [souvik0306/mistral_7b_2epoch_norobots](https://huggingface.co/souvik0306/mistral_7b_2epoch_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_souvik0306__mistral_7b_2epoch_norobots_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T19:18:06.825101](https://huggingface.co/datasets/open-llm-leaderboard/details_souvik0306__mistral_7b_2epoch_norobots_public/blob/main/results_2023-11-23T19-18-06.825101.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6330598338387582,\n \"acc_stderr\": 0.03226570631734972,\n \"acc_norm\": 0.6423858579070316,\n \"acc_norm_stderr\": 0.0329680806753492,\n \"mc1\": 0.27906976744186046,\n \"mc1_stderr\": 0.015702107090627897,\n \"mc2\": 0.4261552372929774,\n \"mc2_stderr\": 0.014190532295151336,\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788268467,\n \"f1\": 0.062363674496644275,\n \"f1_stderr\": 0.0013875357781658866\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5708191126279863,\n \"acc_stderr\": 0.014464085894870653,\n \"acc_norm\": 0.6100682593856656,\n \"acc_norm_stderr\": 0.014252959848892894\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6281617207727545,\n \"acc_stderr\": 0.004823078145064964,\n \"acc_norm\": 0.833698466440948,\n \"acc_norm_stderr\": 0.0037159010850549875\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n \"acc_stderr\": 0.038009680605548594,\n 
\"acc_norm\": 0.7083333333333334,\n \"acc_norm_stderr\": 0.038009680605548594\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.0470070803355104,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.0470070803355104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.041227371113703316,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.041227371113703316\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3862433862433862,\n \"acc_stderr\": 0.025075981767601688,\n \"acc_norm\": 0.3862433862433862,\n \"acc_norm_stderr\": 0.025075981767601688\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.373015873015873,\n \"acc_stderr\": 0.04325506042017086,\n \"acc_norm\": 0.373015873015873,\n \"acc_norm_stderr\": 0.04325506042017086\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7645161290322581,\n \"acc_stderr\": 0.02413763242933771,\n \"acc_norm\": 0.7645161290322581,\n \"acc_norm_stderr\": 0.02413763242933771\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5320197044334976,\n \"acc_stderr\": 0.035107665979592154,\n \"acc_norm\": 0.5320197044334976,\n \"acc_norm_stderr\": 0.035107665979592154\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.03287666758603491,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.03287666758603491\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386414,\n \"acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386414\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n 
\"acc_stderr\": 0.022935144053919443,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635474,\n \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635474\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35555555555555557,\n \"acc_stderr\": 0.02918571494985741,\n \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.02918571494985741\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.03879687024073327,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.03879687024073327\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8256880733944955,\n \"acc_stderr\": 0.016265675632010354,\n \"acc_norm\": 0.8256880733944955,\n \"acc_norm_stderr\": 0.016265675632010354\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.028626547912437406,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.028626547912437406\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7721518987341772,\n \"acc_stderr\": 0.02730348459906943,\n \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.02730348459906943\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.03076935200822915,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.03076935200822915\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.03749492448709698,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.03749492448709698\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822585,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822585\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281386,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281386\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n 
\"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.80970625798212,\n \"acc_stderr\": 0.014036945850381387,\n \"acc_norm\": 0.80970625798212,\n \"acc_norm_stderr\": 0.014036945850381387\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.02418242749657761,\n \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.02418242749657761\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3039106145251397,\n \"acc_stderr\": 0.01538284558758452,\n \"acc_norm\": 0.3039106145251397,\n \"acc_norm_stderr\": 0.01538284558758452\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.02473998135511359,\n \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.02473998135511359\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n \"acc_stderr\": 0.026082700695399662,\n \"acc_norm\": 0.6977491961414791,\n \"acc_norm_stderr\": 0.026082700695399662\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.02456922360046085,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.02456922360046085\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4426336375488918,\n \"acc_stderr\": 0.012685906538206247,\n \"acc_norm\": 0.4426336375488918,\n \"acc_norm_stderr\": 0.012685906538206247\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396553,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396553\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6650326797385621,\n \"acc_stderr\": 0.019094228167000318,\n \"acc_norm\": 0.6650326797385621,\n \"acc_norm_stderr\": 0.019094228167000318\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142777,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142777\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.27906976744186046,\n \"mc1_stderr\": 0.015702107090627897,\n \"mc2\": 0.4261552372929774,\n \"mc2_stderr\": 0.014190532295151336\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7908445146014207,\n 
\"acc_stderr\": 0.01143045004588158\n },\n \"harness|drop|3\": {\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788268467,\n \"f1\": 0.062363674496644275,\n \"f1_stderr\": 0.0013875357781658866\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16982562547384383,\n \"acc_stderr\": 0.010342572360861205\n }\n}\n```", "repo_url": "https://huggingface.co/souvik0306/mistral_7b_2epoch_norobots", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|drop|3_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-18-06.825101.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-18-06.825101.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-18-06.825101.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-18-06.825101.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-18-06.825101.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["**/details_harness|winogrande|5_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T19-18-06.825101.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T19_18_06.825101", "path": ["results_2023-11-23T19-18-06.825101.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T19-18-06.825101.parquet"]}]}]}
2023-11-23T19:21:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of souvik0306/mistral_7b_2epoch_norobots ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model souvik0306/mistral_7b_2epoch_norobots on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following (see the sketch after this card): ## Latest results These are the latest results from run 2023-11-23T19:18:06.825101 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
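The loading snippet referenced above did not survive this record's text processing, so here is a minimal sketch under stated assumptions: the repo id is assumed from the leaderboard's usual `details_<org>__<model>_public` naming convention (it is not spelled out in this record), while the `harness_winogrande_5` configuration and the "train" split behaviour come from the metadata above.

```python
from datasets import load_dataset

# Assumed repo id, derived from the leaderboard's
# `details_<org>__<model>_public` naming convention.
data = load_dataset(
    "open-llm-leaderboard/details_souvik0306__mistral_7b_2epoch_norobots_public",
    "harness_winogrande_5",  # one of the 64 per-task configurations
    split="train",           # "train" always points to the latest results
)
```

Swapping in any other config name from the metadata above (for example `harness_hendrycksTest_management_5`) should pull that task's per-sample details instead.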
[ "# Dataset Card for Evaluation run of souvik0306/mistral_7b_2epoch_norobots", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model souvik0306/mistral_7b_2epoch_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:18:06.825101(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of souvik0306/mistral_7b_2epoch_norobots", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model souvik0306/mistral_7b_2epoch_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:18:06.825101(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 27, 31, 176, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of souvik0306/mistral_7b_2epoch_norobots## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model souvik0306/mistral_7b_2epoch_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T19:18:06.825101(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
caf793af70c9e290d9a16ff0d604c2b42733f729
# Dataset Card for Evaluation run of uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85](https://huggingface.co/uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)); a sketch for loading it follows this card. To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_uukuguy__CollectiveCognition-v1.1-Mistral-7B-dare-0.85_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-23T19:19:22.420919](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__CollectiveCognition-v1.1-Mistral-7B-dare-0.85_public/blob/main/results_2023-11-23T19-19-22.420919.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6373539881235634, "acc_stderr": 0.032200043467933794, "acc_norm": 0.6462425671540708, "acc_norm_stderr": 0.032891781056948864, "mc1": 0.3023255813953488, "mc1_stderr": 0.016077509266133026, "mc2": 0.44867041308885225, "mc2_stderr": 0.014511741253113358, "em": 0.001572986577181208, "em_stderr": 0.00040584511324177333, "f1": 0.06318477348993282, "f1_stderr": 0.0013946687452644612 }, "harness|arc:challenge|25": { "acc": 0.5802047781569966, "acc_stderr": 0.014422181226303026, "acc_norm": 0.6100682593856656, "acc_norm_stderr": 0.014252959848892893 }, "harness|hellaswag|10": { "acc": 0.6451902011551484, "acc_stderr": 0.004774778180345194, "acc_norm": 0.8430591515634336, "acc_norm_stderr": 0.0036300159898963996 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6370370370370371, "acc_stderr": 0.04153948404742398, "acc_norm": 0.6370370370370371, "acc_norm_stderr": 0.04153948404742398 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6513157894736842, "acc_stderr": 0.038781398887976104, "acc_norm": 0.6513157894736842, "acc_norm_stderr": 0.038781398887976104 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118637, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118637 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7222222222222222, "acc_stderr": 0.037455547914624555, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.037455547914624555 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6473988439306358, "acc_stderr": 0.036430371689585475, "acc_norm": 0.6473988439306358, "acc_norm_stderr": 0.036430371689585475 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.04093601807403326, "acc_norm": 0.79, "acc_norm_stderr": 0.04093601807403326 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6, "acc_stderr": 0.03202563076101735, "acc_norm": 0.6, "acc_norm_stderr": 0.03202563076101735 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5263157894736842, "acc_stderr": 0.046970851366478626, "acc_norm": 0.5263157894736842, "acc_norm_stderr": 0.046970851366478626 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5793103448275863, "acc_stderr": 0.0411391498118926, "acc_norm": 0.5793103448275863, "acc_norm_stderr": 0.0411391498118926 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3888888888888889, "acc_stderr": 0.025107425481137282, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.025107425481137282 }, 
"harness|hendrycksTest-formal_logic|5": { "acc": 0.3968253968253968, "acc_stderr": 0.043758884927270605, "acc_norm": 0.3968253968253968, "acc_norm_stderr": 0.043758884927270605 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7741935483870968, "acc_stderr": 0.023785577884181012, "acc_norm": 0.7741935483870968, "acc_norm_stderr": 0.023785577884181012 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5320197044334976, "acc_stderr": 0.035107665979592154, "acc_norm": 0.5320197044334976, "acc_norm_stderr": 0.035107665979592154 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7818181818181819, "acc_stderr": 0.032250781083062896, "acc_norm": 0.7818181818181819, "acc_norm_stderr": 0.032250781083062896 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7777777777777778, "acc_stderr": 0.029620227874790486, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.029620227874790486 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8756476683937824, "acc_stderr": 0.02381447708659355, "acc_norm": 0.8756476683937824, "acc_norm_stderr": 0.02381447708659355 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.028897748741131143, "acc_norm": 0.34074074074074073, "acc_norm_stderr": 0.028897748741131143 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6680672268907563, "acc_stderr": 0.03058869701378364, "acc_norm": 0.6680672268907563, "acc_norm_stderr": 0.03058869701378364 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33774834437086093, "acc_stderr": 0.038615575462551684, "acc_norm": 0.33774834437086093, "acc_norm_stderr": 0.038615575462551684 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8220183486238533, "acc_stderr": 0.016399436366612927, "acc_norm": 0.8220183486238533, "acc_norm_stderr": 0.016399436366612927 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5509259259259259, "acc_stderr": 0.033922384053216174, "acc_norm": 0.5509259259259259, "acc_norm_stderr": 0.033922384053216174 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.028626547912437406, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.028626547912437406 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7679324894514767, "acc_stderr": 0.027479744550808514, "acc_norm": 0.7679324894514767, "acc_norm_stderr": 0.027479744550808514 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6905829596412556, "acc_stderr": 0.031024411740572213, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.031024411740572213 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7862595419847328, "acc_stderr": 0.0359546161177469, "acc_norm": 0.7862595419847328, "acc_norm_stderr": 0.0359546161177469 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824 }, 
"harness|hendrycksTest-jurisprudence|5": { "acc": 0.7685185185185185, "acc_stderr": 0.04077494709252627, "acc_norm": 0.7685185185185185, "acc_norm_stderr": 0.04077494709252627 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7914110429447853, "acc_stderr": 0.03192193448934724, "acc_norm": 0.7914110429447853, "acc_norm_stderr": 0.03192193448934724 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5, "acc_stderr": 0.04745789978762494, "acc_norm": 0.5, "acc_norm_stderr": 0.04745789978762494 }, "harness|hendrycksTest-management|5": { "acc": 0.8252427184466019, "acc_stderr": 0.03760178006026621, "acc_norm": 0.8252427184466019, "acc_norm_stderr": 0.03760178006026621 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8717948717948718, "acc_stderr": 0.02190190511507333, "acc_norm": 0.8717948717948718, "acc_norm_stderr": 0.02190190511507333 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8173690932311622, "acc_stderr": 0.013816335389973133, "acc_norm": 0.8173690932311622, "acc_norm_stderr": 0.013816335389973133 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7109826589595376, "acc_stderr": 0.02440517393578323, "acc_norm": 0.7109826589595376, "acc_norm_stderr": 0.02440517393578323 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.32737430167597764, "acc_stderr": 0.015694238967737383, "acc_norm": 0.32737430167597764, "acc_norm_stderr": 0.015694238967737383 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7549019607843137, "acc_stderr": 0.024630048979824775, "acc_norm": 0.7549019607843137, "acc_norm_stderr": 0.024630048979824775 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7170418006430869, "acc_stderr": 0.025583062489984824, "acc_norm": 0.7170418006430869, "acc_norm_stderr": 0.025583062489984824 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7345679012345679, "acc_stderr": 0.024569223600460845, "acc_norm": 0.7345679012345679, "acc_norm_stderr": 0.024569223600460845 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4858156028368794, "acc_stderr": 0.02981549448368206, "acc_norm": 0.4858156028368794, "acc_norm_stderr": 0.02981549448368206 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.45371577574967403, "acc_stderr": 0.012715404841277738, "acc_norm": 0.45371577574967403, "acc_norm_stderr": 0.012715404841277738 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6617647058823529, "acc_stderr": 0.028739328513983572, "acc_norm": 0.6617647058823529, "acc_norm_stderr": 0.028739328513983572 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6764705882352942, "acc_stderr": 0.018926082916083383, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.018926082916083383 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.0449429086625209, "acc_norm": 0.6727272727272727, "acc_norm_stderr": 0.0449429086625209 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784593, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784593 }, "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.026193923544454125, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.026193923544454125 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.83, "acc_stderr": 0.0377525168068637, "acc_norm": 0.83, "acc_norm_stderr": 0.0377525168068637 }, 
"harness|hendrycksTest-virology|5": { "acc": 0.5481927710843374, "acc_stderr": 0.03874371556587953, "acc_norm": 0.5481927710843374, "acc_norm_stderr": 0.03874371556587953 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8421052631578947, "acc_stderr": 0.027966785859160896, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160896 }, "harness|truthfulqa:mc|0": { "mc1": 0.3023255813953488, "mc1_stderr": 0.016077509266133026, "mc2": 0.44867041308885225, "mc2_stderr": 0.014511741253113358 }, "harness|winogrande|5": { "acc": 0.7884767166535123, "acc_stderr": 0.011477747684223194 }, "harness|drop|3": { "em": 0.001572986577181208, "em_stderr": 0.00040584511324177333, "f1": 0.06318477348993282, "f1_stderr": 0.0013946687452644612 }, "harness|gsm8k|5": { "acc": 0.18953752843062927, "acc_stderr": 0.010795837931896386 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_uukuguy__CollectiveCognition-v1.1-Mistral-7B-dare-0.85
[ "region:us" ]
2023-11-23T19:22:22+00:00
{"pretty_name": "Evaluation run of uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85](https://huggingface.co/uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__CollectiveCognition-v1.1-Mistral-7B-dare-0.85_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T19:19:22.420919](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__CollectiveCognition-v1.1-Mistral-7B-dare-0.85_public/blob/main/results_2023-11-23T19-19-22.420919.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6373539881235634,\n \"acc_stderr\": 0.032200043467933794,\n \"acc_norm\": 0.6462425671540708,\n \"acc_norm_stderr\": 0.032891781056948864,\n \"mc1\": 0.3023255813953488,\n \"mc1_stderr\": 0.016077509266133026,\n \"mc2\": 0.44867041308885225,\n \"mc2_stderr\": 0.014511741253113358,\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.00040584511324177333,\n \"f1\": 0.06318477348993282,\n \"f1_stderr\": 0.0013946687452644612\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5802047781569966,\n \"acc_stderr\": 0.014422181226303026,\n \"acc_norm\": 0.6100682593856656,\n \"acc_norm_stderr\": 0.014252959848892893\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6451902011551484,\n \"acc_stderr\": 0.004774778180345194,\n \"acc_norm\": 0.8430591515634336,\n \"acc_norm_stderr\": 0.0036300159898963996\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.038781398887976104,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.038781398887976104\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n 
\"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.036430371689585475,\n \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.036430371689585475\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.03202563076101735,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.03202563076101735\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.025107425481137282,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.025107425481137282\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n \"acc_stderr\": 0.023785577884181012,\n \"acc_norm\": 0.7741935483870968,\n \"acc_norm_stderr\": 0.023785577884181012\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5320197044334976,\n \"acc_stderr\": 0.035107665979592154,\n \"acc_norm\": 0.5320197044334976,\n \"acc_norm_stderr\": 0.035107665979592154\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.032250781083062896,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.032250781083062896\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.029620227874790486,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.029620227874790486\n },\n 
\"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.02381447708659355,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.02381447708659355\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8220183486238533,\n \"acc_stderr\": 0.016399436366612927,\n \"acc_norm\": 0.8220183486238533,\n \"acc_norm_stderr\": 0.016399436366612927\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.033922384053216174,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.033922384053216174\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.028626547912437406,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.028626547912437406\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7679324894514767,\n \"acc_stderr\": 0.027479744550808514,\n \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.027479744550808514\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.031024411740572213,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.031024411740572213\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.03192193448934724,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.03192193448934724\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n 
\"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n \"acc_stderr\": 0.013816335389973133,\n \"acc_norm\": 0.8173690932311622,\n \"acc_norm_stderr\": 0.013816335389973133\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.02440517393578323,\n \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.02440517393578323\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32737430167597764,\n \"acc_stderr\": 0.015694238967737383,\n \"acc_norm\": 0.32737430167597764,\n \"acc_norm_stderr\": 0.015694238967737383\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7549019607843137,\n \"acc_stderr\": 0.024630048979824775,\n \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.024630048979824775\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.025583062489984824,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.025583062489984824\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45371577574967403,\n \"acc_stderr\": 0.012715404841277738,\n \"acc_norm\": 0.45371577574967403,\n \"acc_norm_stderr\": 0.012715404841277738\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6617647058823529,\n \"acc_stderr\": 0.028739328513983572,\n \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.028739328513983572\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083383,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083383\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160896,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160896\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3023255813953488,\n \"mc1_stderr\": 0.016077509266133026,\n \"mc2\": 0.44867041308885225,\n 
\"mc2_stderr\": 0.014511741253113358\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7884767166535123,\n \"acc_stderr\": 0.011477747684223194\n },\n \"harness|drop|3\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.00040584511324177333,\n \"f1\": 0.06318477348993282,\n \"f1_stderr\": 0.0013946687452644612\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.18953752843062927,\n \"acc_stderr\": 0.010795837931896386\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|drop|3_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-19-22.420919.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-19-22.420919.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-19-22.420919.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-19-22.420919.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["**/details_harness|winogrande|5_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T19-19-22.420919.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T19_19_22.420919", "path": ["results_2023-11-23T19-19-22.420919.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T19-19-22.420919.parquet"]}]}]}
2023-11-23T19:23:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T19:19:22.420919 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
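The processed card above references a loading snippet without showing it; a minimal sketch of that step, again assuming the repo id follows the `details_<org>__<model>_public` naming used by the neighbouring record:

```python
from datasets import load_dataset

# Assumed repo id; the processed text replaces the actual URL with "URL".
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__CollectiveCognition-v1.1-Mistral-7B-dare-0.85_public",
    "harness_winogrande_5",
    split="train",  # "train" always points at the latest results
)
```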
[ "# Dataset Card for Evaluation run of uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:19:22.420919(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:19:22.420919(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 33, 31, 182, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T19:19:22.420919(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
85e394fc18f8f46a2614ff17733733ae52d400d5
# Dataset Card for Evaluation run of ZoidBB/unraveled-7b-a1

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/ZoidBB/unraveled-7b-a1
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [ZoidBB/unraveled-7b-a1](https://huggingface.co/ZoidBB/unraveled-7b-a1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ZoidBB__unraveled-7b-a1_public",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-11-23T19:22:53.071269](https://huggingface.co/datasets/open-llm-leaderboard/details_ZoidBB__unraveled-7b-a1_public/blob/main/results_2023-11-23T19-22-53.071269.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.627016601295214, "acc_stderr": 0.03235619334922418, "acc_norm": 0.6365794887378388, "acc_norm_stderr": 0.03306927475699416, "mc1": 0.28151774785801714, "mc1_stderr": 0.01574402724825605, "mc2": 0.42228384526614654, "mc2_stderr": 0.014152177395393957, "em": 0.0017827181208053692, "em_stderr": 0.00043200973460387867, "f1": 0.06056837248322149, "f1_stderr": 0.0013671084143061485 },
    "harness|arc:challenge|25": { "acc": 0.5648464163822525, "acc_stderr": 0.014487986197186045, "acc_norm": 0.5981228668941979, "acc_norm_stderr": 0.014327268614578274 },
    "harness|hellaswag|10": { "acc": 0.635929097789285, "acc_stderr": 0.004801852881329739, "acc_norm": 0.8280223063134834, "acc_norm_stderr": 0.0037658983649388736 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.5925925925925926, "acc_stderr": 0.04244633238353227, "acc_norm": 0.5925925925925926, "acc_norm_stderr": 0.04244633238353227 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.6381578947368421, "acc_stderr": 0.039105257528497236, "acc_norm": 0.6381578947368421, "acc_norm_stderr": 0.039105257528497236 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6943396226415094, "acc_stderr": 0.028353298073322666, "acc_norm": 0.6943396226415094, "acc_norm_stderr": 0.028353298073322666 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.7291666666666666, "acc_stderr": 0.03716177437566017, "acc_norm": 0.7291666666666666, "acc_norm_stderr": 0.03716177437566017 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.53, "acc_stderr": 0.05016135580465919, "acc_norm": 0.53, "acc_norm_stderr": 0.05016135580465919 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.4, "acc_stderr": 0.04923659639173309, "acc_norm": 0.4, "acc_norm_stderr": 0.04923659639173309 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5787234042553191, "acc_stderr": 0.03227834510146268, "acc_norm": 0.5787234042553191, "acc_norm_stderr": 0.03227834510146268 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5862068965517241, "acc_stderr": 0.04104269211806232, "acc_norm": 0.5862068965517241, "acc_norm_stderr": 0.04104269211806232 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4074074074074074, "acc_stderr": 0.02530590624159063, "acc_norm": 0.4074074074074074, "acc_norm_stderr": 0.02530590624159063 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.42063492063492064, "acc_stderr": 0.04415438226743744, "acc_norm": 0.42063492063492064, "acc_norm_stderr": 0.04415438226743744 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7580645161290323, "acc_stderr": 0.024362599693031096, "acc_norm": 0.7580645161290323, "acc_norm_stderr": 0.024362599693031096 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4827586206896552, "acc_stderr": 0.035158955511657, "acc_norm": 0.4827586206896552, "acc_norm_stderr": 0.035158955511657 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7757575757575758, "acc_stderr": 0.03256866661681102, "acc_norm": 0.7757575757575758, "acc_norm_stderr": 0.03256866661681102 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.02886977846026704, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.02886977846026704 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8704663212435233, "acc_stderr": 0.024233532297758733, "acc_norm": 0.8704663212435233, "acc_norm_stderr": 0.024233532297758733 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.362962962962963, "acc_stderr": 0.029318203645206865, "acc_norm": 0.362962962962963, "acc_norm_stderr": 0.029318203645206865 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.634453781512605, "acc_stderr": 0.031282177063684614, "acc_norm": 0.634453781512605, "acc_norm_stderr": 0.031282177063684614 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.039439666991836285, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.039439666991836285 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8073394495412844, "acc_stderr": 0.016909276884936073, "acc_norm": 0.8073394495412844, "acc_norm_stderr": 0.016909276884936073 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 0.033674621388960775, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 0.033674621388960775 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.0286265479124374, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.0286265479124374 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7848101265822784, "acc_stderr": 0.026750826994676173, "acc_norm": 0.7848101265822784, "acc_norm_stderr": 0.026750826994676173 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6681614349775785, "acc_stderr": 0.031602951437766785, "acc_norm": 0.6681614349775785, "acc_norm_stderr": 0.031602951437766785 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7709923664122137, "acc_stderr": 0.036853466317118506, "acc_norm": 0.7709923664122137, "acc_norm_stderr": 0.036853466317118506 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.768595041322314, "acc_stderr": 0.03849856098794088, "acc_norm": 0.768595041322314, "acc_norm_stderr": 0.03849856098794088 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7129629629629629, "acc_stderr": 0.04373313040914761, "acc_norm": 0.7129629629629629, "acc_norm_stderr": 0.04373313040914761 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7975460122699386, "acc_stderr": 0.03157065078911901, "acc_norm": 0.7975460122699386, "acc_norm_stderr": 0.03157065078911901 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.41964285714285715, "acc_stderr": 0.04684099321077106, "acc_norm": 0.41964285714285715, "acc_norm_stderr": 0.04684099321077106 },
    "harness|hendrycksTest-management|5": { "acc": 0.7961165048543689, "acc_stderr": 0.039891398595317706, "acc_norm": 0.7961165048543689, "acc_norm_stderr": 0.039891398595317706 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8888888888888888, "acc_stderr": 0.020588491316092368, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.020588491316092368 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8148148148148148, "acc_stderr": 0.013890862162876163, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.013890862162876163 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.708092485549133, "acc_stderr": 0.024476994076247333, "acc_norm": 0.708092485549133, "acc_norm_stderr": 0.024476994076247333 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3039106145251397, "acc_stderr": 0.015382845587584517, "acc_norm": 0.3039106145251397, "acc_norm_stderr": 0.015382845587584517 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.7450980392156863, "acc_stderr": 0.02495418432487991, "acc_norm": 0.7450980392156863, "acc_norm_stderr": 0.02495418432487991 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.684887459807074, "acc_stderr": 0.026385273703464485, "acc_norm": 0.684887459807074, "acc_norm_stderr": 0.026385273703464485 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.7129629629629629, "acc_stderr": 0.02517104191530968, "acc_norm": 0.7129629629629629, "acc_norm_stderr": 0.02517104191530968 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4716312056737589, "acc_stderr": 0.029779450957303062, "acc_norm": 0.4716312056737589, "acc_norm_stderr": 0.029779450957303062 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.4426336375488918, "acc_stderr": 0.012685906538206247, "acc_norm": 0.4426336375488918, "acc_norm_stderr": 0.012685906538206247 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6838235294117647, "acc_stderr": 0.028245687391462923, "acc_norm": 0.6838235294117647, "acc_norm_stderr": 0.028245687391462923 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6437908496732027, "acc_stderr": 0.0193733324207245, "acc_norm": 0.6437908496732027, "acc_norm_stderr": 0.0193733324207245 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.0449429086625209, "acc_norm": 0.6727272727272727, "acc_norm_stderr": 0.0449429086625209 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.7142857142857143, "acc_stderr": 0.0289205832206756, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.0289205832206756 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8507462686567164, "acc_stderr": 0.02519692987482706, "acc_norm": 0.8507462686567164, "acc_norm_stderr": 0.02519692987482706 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.88, "acc_stderr": 0.03265986323710906, "acc_norm": 0.88, "acc_norm_stderr": 0.03265986323710906 },
    "harness|hendrycksTest-virology|5": { "acc": 0.5301204819277109, "acc_stderr": 0.03885425420866767, "acc_norm": 0.5301204819277109, "acc_norm_stderr": 0.03885425420866767 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.8128654970760234, "acc_stderr": 0.02991312723236804, "acc_norm": 0.8128654970760234, "acc_norm_stderr": 0.02991312723236804 },
    "harness|truthfulqa:mc|0": { "mc1": 0.28151774785801714, "mc1_stderr": 0.01574402724825605, "mc2": 0.42228384526614654, "mc2_stderr": 0.014152177395393957 },
    "harness|winogrande|5": { "acc": 0.7719021310181531, "acc_stderr": 0.011793015817663592 },
    "harness|drop|3": { "em": 0.0017827181208053692, "em_stderr": 0.00043200973460387867, "f1": 0.06056837248322149, "f1_stderr": 0.0013671084143061485 },
    "harness|gsm8k|5": { "acc": 0.14329037149355572, "acc_stderr": 0.009650895723357585 }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
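Beyond the per-task details loaded in the snippet above, the aggregated numbers shown under "Latest results" live in the `results` configuration. A small usage sketch; the exact field layout of the returned row is an assumption, so inspect it rather than hard-coding keys:

```python
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_ZoidBB__unraveled-7b-a1_public",
    "results",
    split="latest",  # alias for the 2023-11-23T19:22:53.071269 run
)
# One row per recorded run; print it to see the aggregated metrics
# (acc, acc_norm, mc1/mc2, em/f1) instead of assuming a schema.
print(results[0])
```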
open-llm-leaderboard/details_ZoidBB__unraveled-7b-a1
[ "region:us" ]
2023-11-23T19:25:54+00:00
{"pretty_name": "Evaluation run of ZoidBB/unraveled-7b-a1", "dataset_summary": "Dataset automatically created during the evaluation run of model [ZoidBB/unraveled-7b-a1](https://huggingface.co/ZoidBB/unraveled-7b-a1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ZoidBB__unraveled-7b-a1_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T19:22:53.071269](https://huggingface.co/datasets/open-llm-leaderboard/details_ZoidBB__unraveled-7b-a1_public/blob/main/results_2023-11-23T19-22-53.071269.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.627016601295214,\n \"acc_stderr\": 0.03235619334922418,\n \"acc_norm\": 0.6365794887378388,\n \"acc_norm_stderr\": 0.03306927475699416,\n \"mc1\": 0.28151774785801714,\n \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.42228384526614654,\n \"mc2_stderr\": 0.014152177395393957,\n \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460387867,\n \"f1\": 0.06056837248322149,\n \"f1_stderr\": 0.0013671084143061485\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5648464163822525,\n \"acc_stderr\": 0.014487986197186045,\n \"acc_norm\": 0.5981228668941979,\n \"acc_norm_stderr\": 0.014327268614578274\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.635929097789285,\n \"acc_stderr\": 0.004801852881329739,\n \"acc_norm\": 0.8280223063134834,\n \"acc_norm_stderr\": 0.0037658983649388736\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.04244633238353227,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.04244633238353227\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6381578947368421,\n \"acc_stderr\": 0.039105257528497236,\n \"acc_norm\": 0.6381578947368421,\n \"acc_norm_stderr\": 0.039105257528497236\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n \"acc_norm_stderr\": 
0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.02530590624159063,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.02530590624159063\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n \"acc_stderr\": 0.024362599693031096,\n \"acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.024362599693031096\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.035158955511657,\n \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.035158955511657\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026704,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026704\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758733,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 
0.024233532297758733\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.029318203645206865,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.029318203645206865\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.634453781512605,\n \"acc_stderr\": 0.031282177063684614,\n \"acc_norm\": 0.634453781512605,\n \"acc_norm_stderr\": 0.031282177063684614\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.039439666991836285,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.039439666991836285\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8073394495412844,\n \"acc_stderr\": 0.016909276884936073,\n \"acc_norm\": 0.8073394495412844,\n \"acc_norm_stderr\": 0.016909276884936073\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.0286265479124374,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.0286265479124374\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676173,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676173\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.031602951437766785,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.031602951437766785\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.04373313040914761,\n \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.04373313040914761\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.03157065078911901,\n \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.03157065078911901\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092368,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092368\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.013890862162876163,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.013890862162876163\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.708092485549133,\n \"acc_stderr\": 0.024476994076247333,\n \"acc_norm\": 0.708092485549133,\n \"acc_norm_stderr\": 0.024476994076247333\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3039106145251397,\n \"acc_stderr\": 0.015382845587584517,\n \"acc_norm\": 0.3039106145251397,\n \"acc_norm_stderr\": 0.015382845587584517\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.02495418432487991,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.02495418432487991\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.684887459807074,\n \"acc_stderr\": 0.026385273703464485,\n \"acc_norm\": 0.684887459807074,\n \"acc_norm_stderr\": 0.026385273703464485\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.02517104191530968,\n \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.02517104191530968\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4716312056737589,\n \"acc_stderr\": 0.029779450957303062,\n \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.029779450957303062\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4426336375488918,\n \"acc_stderr\": 0.012685906538206247,\n \"acc_norm\": 0.4426336375488918,\n \"acc_norm_stderr\": 0.012685906538206247\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462923,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462923\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6437908496732027,\n \"acc_stderr\": 0.0193733324207245,\n \"acc_norm\": 0.6437908496732027,\n \"acc_norm_stderr\": 0.0193733324207245\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.02519692987482706,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.02519692987482706\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28151774785801714,\n \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.42228384526614654,\n \"mc2_stderr\": 0.014152177395393957\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.011793015817663592\n },\n \"harness|drop|3\": {\n \"em\": 
0.0017827181208053692,\n \"em_stderr\": 0.00043200973460387867,\n \"f1\": 0.06056837248322149,\n \"f1_stderr\": 0.0013671084143061485\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14329037149355572,\n \"acc_stderr\": 0.009650895723357585\n }\n}\n```", "repo_url": "https://huggingface.co/ZoidBB/unraveled-7b-a1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|drop|3_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-22-53.071269.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-22-53.071269.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T19-22-53.071269.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T19-22-53.071269.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T19-22-53.071269.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["**/details_harness|winogrande|5_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T19-22-53.071269.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T19_22_53.071269", "path": ["results_2023-11-23T19-22-53.071269.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T19-22-53.071269.parquet"]}]}]}
2023-11-23T19:26:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of ZoidBB/unraveled-7b-a1 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model ZoidBB/unraveled-7b-a1 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T19:22:53.071269 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
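A minimal sketch of the load step mentioned above, assuming this repo follows the leaderboard's usual `details_<org>__<model>_public` naming (the exact repo id is an assumption) and using one of the configurations listed in this record's metadata:

```python
from datasets import load_dataset

# Hypothetical repo id, inferred from the leaderboard's naming convention;
# "harness_winogrande_5" is one of the configurations listed above.
data = load_dataset(
    "open-llm-leaderboard/details_ZoidBB__unraveled-7b-a1_public",
    "harness_winogrande_5",
    split="train",
)
print(data)
```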
[ "# Dataset Card for Evaluation run of ZoidBB/unraveled-7b-a1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ZoidBB/unraveled-7b-a1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:22:53.071269(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of ZoidBB/unraveled-7b-a1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ZoidBB/unraveled-7b-a1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T19:22:53.071269(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 21, 31, 170, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ZoidBB/unraveled-7b-a1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ZoidBB/unraveled-7b-a1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T19:22:53.071269(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
76f520b5bc29faa7874cfc6e729854fc19eaf6b5
# Dataset Card for "no_robots_enfr" This is a filtered version of [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots), then traduced to french with Deepl pro API, the best translation solution available on the market. Our goal is to gather french data for one turn chatbot, on general subjects. We filtered few data from the original dataset: - We kept only the one turn questions - We took out any data where a system role is settle at the beginning, as our LLM will have a unique role that we don't have to define before a query. - We kept the category information from the original dataset | Category | Number of Data | Mean Words (Query) | Mean Words (Output) | |----------------|----------------|--------------------|---------------------| | Brainstorm | 1120 | 35 | 217 | | Generation | 4560 | 35 | 177 | | Rewrite | 660 | 258 | 206 | | Open QA | 1240 | 12 | 73 | | Classify | 350 | 121 | 29 | | Summarize | 420 | 238 | 64 | | Coding | 350 | 55 | 124 | | Extract | 190 | 270 | 36 | | Closed QA | 260 | 217 | 22 | |----------------|----------------|--------------------|---------------------| | General Dataset| 9150 | 71 | 150 | Depending on our need we will filter those data by category to not inject hallicination in our fine-tuning. The splits are made as each split have the same proportion of each categories: Train dataset size: 7866 Eval dataset size: 637 Test dataset size: 647
ProfessorBob/no_robots_enfr
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:fr", "language:en", "region:us" ]
2023-11-23T19:54:34+00:00
{"language": ["fr", "en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "category", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "qid", "dtype": "int64"}, {"name": "fr_query", "dtype": "string"}, {"name": "fr_output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22352415.03147541, "num_examples": 7866}, {"name": "eval", "num_bytes": 1810130.7367213115, "num_examples": 637}, {"name": "test", "num_bytes": 1838547.2318032787, "num_examples": 647}], "download_size": 16266132, "dataset_size": 26001093.0}}
2024-01-11T09:09:56+00:00
[]
[ "fr", "en" ]
TAGS #task_categories-text-generation #size_categories-1K<n<10K #language-French #language-English #region-us
Dataset Card for "no\_robots\_enfr" =================================== This is a filtered version of HuggingFaceH4/no\_robots, then traduced to french with Deepl pro API, the best translation solution available on the market. Our goal is to gather french data for one turn chatbot, on general subjects. We filtered few data from the original dataset: * We kept only the one turn questions * We took out any data where a system role is settle at the beginning, as our LLM will have a unique role that we don't have to define before a query. * We kept the category information from the original dataset Depending on our need we will filter those data by category to not inject hallicination in our fine-tuning. The splits are made as each split have the same proportion of each categories: Train dataset size: 7866 Eval dataset size: 637 Test dataset size: 647
[]
[ "TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-French #language-English #region-us \n" ]
[ 39 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-French #language-English #region-us \n" ]
186c8282c499e2937c250c62fecfadb36351c6e0
# Summary

Publicly available subset of the IAHLT UD Hebrew Treebank's Wikipedia section (https://www.iahlt.org/)

# Introduction

The UD Hebrew-IAHLTWiki treebank consists of 5,000 contemporary Hebrew sentences representing a variety of texts originating from Wikipedia entries, compiled by the [Israeli Association of Human Language Technology](https://www.iahlt.org/). It includes various text domains, such as: biography, law, finance, health, places, events and miscellaneous. The schema for the UD Hebrew-IAHLT treebank, from which the publicly available UD Hebrew-IAHLTWiki subset is derived, is based on the conversion of the Hebrew Treebank (HTB) into the latest UD V2 and is checked against the Universal Dependencies validator as of UD release V2.10, in addition to a range of additional validations using the grewv tool.

The HTB version used in the project was initially converted automatically, then a subset of the converted data was manually validated and adopted as a gold standard for training the model for UD parsing used in Hebrew-IAHLT. The entire parsed data has been manually edited to correct parsing errors, and was automatically QA'ed to apply corrections following updates in the schema.

# Acknowledgments

We would like to thank all the people who contributed to this corpus: Amir Zeldes, Hilla Merhav, Israel Landau, Netanel Dahan, Nick Howell, Noam Ordan, Omer Strass, Shira Wigderson, Yael Minerbi, Yifat Ben Moshe

## Usage

```bash
pip install conllu
```

```python
from datasets import load_dataset
dataset = load_dataset("iahlt/UD_Hebrew-IAHLTwiki")
```

## References

To cite this dataset, please refer to the following paper:

Zeldes, Amir, Nick Howell, Noam Ordan and Yifat Ben Moshe (2022) [A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing](https://arxiv.org/abs/2210.07873). In: *Proceedings of EMNLP 2022*. Abu Dhabi, UAE.

```
@InProceedings{ZeldesHowellOrdanBenMoshe2022,
  author = {Amir Zeldes and Nick Howell and Noam Ordan and Yifat Ben Moshe},
  booktitle = {Proceedings of {EMNLP} 2022},
  title = {A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing},
  year = {2022},
  address = {Abu Dhabi, UAE},
}
```
iahlt/UD_Hebrew-IAHLTwiki
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language:he", "license:cc-by-sa-4.0", "constituency-parsing", "dependency-parsing", "arxiv:2210.07873", "region:us" ]
2023-11-23T19:59:23+00:00
{"annotations_creators": ["expert-generated"], "language": ["he"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "tags": ["constituency-parsing", "dependency-parsing"]}
2023-11-23T20:05:08+00:00
[ "2210.07873" ]
[ "he" ]
TAGS #task_categories-token-classification #annotations_creators-expert-generated #language-Hebrew #license-cc-by-sa-4.0 #constituency-parsing #dependency-parsing #arxiv-2210.07873 #region-us
# Summary Publicly available subset of the IAHLT UD Hebrew Treebank's Wikipedia section (URL) # Introduction The UD Hebrew-IAHLTWiki treebank consists of 5,000 contemporary Hebrew sentences representing a variety of texts originating from Wikipedia entries, compiled by the Israeli Association of Human Language Technology. It includes various text domains, such as: biography, law, finance, health, places, events and miscellaneous. The schema for the UD Hebrew-IAHLT treebank, from which the publicly available UD Hebrew-IAHLTWiki subset is derived, is based on the conversion of the Hebrew Treebank (HTB) into the latest UD V2 and is checked against the Universal Dependencies validator as of UD release V2.10, in addition to a range of additional validations using the grewv tool. The HTB version used in the project was initially converted automatically, then a subset of the converted data was manually validated and adopted as a gold standard for training the model for UD parsing used in Hebrew-IAHLT. The entire parsed data has been manually edited to correct parsing errors, and was automatically QA'ed to apply corrections following updates in the schema. # Acknowledgments We would like to thank all the people who contributed to this corpus: Amir Zeldes, Hilla Merhav, Israel Landau, Netanel Dahan, Nick Howell, Noam Ordan, Omer Strass, Shira Wigderson, Yael Minerbi, Yifat Ben Moshe ## Usage ## References To cite this dataset, please refer to the following paper: Zeldes, Amir, Nick Howell, Noam Ordan and Yifat Ben Moshe (2022) A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing. In: *Proceedings of EMNLP 2022*. Abu Dhabi, UAE.
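A minimal sketch of the Usage section above, matching the full card earlier in this record (the `conllu` package is the extra dependency the card asks you to install):

```python
# Prerequisite from the card: pip install conllu
from datasets import load_dataset

dataset = load_dataset("iahlt/UD_Hebrew-IAHLTwiki")
print(dataset)
```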
[ "# Summary\n\nPublicly available subset of the IAHLT UD Hebrew Treebank's Wikipedia section (URL", "# Introduction\n\nThe UD Hebrew-IAHLTWiki treebank consists of 5,000 contemporary Hebrew sentences representing a variety of texts originating from Wikipedia entries, compiled by the Israeli Association of Human Language Technology. It includes various text domains, such as: biography, law, finance, health, places, events and miscellaneous. The schema for the UD Hebrew-IAHLT treebank, from which the publicly available UD Hebrew-IAHLTWiki subset is derived, is based on the conversion of the Hebrew Treebank (HTB) into the latest UD V2 and is checked against the Universal Dependencies validator as of UD release V2.10, in addition to a range of additional validations using the grewv tool.\n\nThe HTB version used in the project was initially converted automatically, then a subset of the converted data was manually validated and adopted as a gold standard for training the model for UD parsing used in Hebrew-IAHLT. The entire parsed data has been manually edited to correct parsing errors, and was automatically QA'ed to apply corrections following updates in the schema.", "# Acknowledgments\n\nWe would like to thank all the people who contributed to this corpus: Amir Zeldes, Hilla Merhav, Israel Landau, Netanel Dahan, Nick Howell, Noam Ordan, Omer Strass, Shira Wigderson, Yael Minerbi, Yifat Ben Moshe", "## Usage", "## References\n\nTo cite this dataset please refer to the following paper:\n\nZeldes, Amir, Nick Howell, Noam Ordan and Yifat Ben Moshe (2022) A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing. In: *Proceedings of EMNLP 2022*. Abu Dhabi, UAE." ]
[ "TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language-Hebrew #license-cc-by-sa-4.0 #constituency-parsing #dependency-parsing #arxiv-2210.07873 #region-us \n", "# Summary\n\nPublicly available subset of the IAHLT UD Hebrew Treebank's Wikipedia section (URL", "# Introduction\n\nThe UD Hebrew-IAHLTWiki treebank consists of 5,000 contemporary Hebrew sentences representing a variety of texts originating from Wikipedia entries, compiled by the Israeli Association of Human Language Technology. It includes various text domains, such as: biography, law, finance, health, places, events and miscellaneous. The schema for the UD Hebrew-IAHLT treebank, from which the publicly available UD Hebrew-IAHLTWiki subset is derived, is based on the conversion of the Hebrew Treebank (HTB) into the latest UD V2 and is checked against the Universal Dependencies validator as of UD release V2.10, in addition to a range of additional validations using the grewv tool.\n\nThe HTB version used in the project was initially converted automatically, then a subset of the converted data was manually validated and adopted as a gold standard for training the model for UD parsing used in Hebrew-IAHLT. The entire parsed data has been manually edited to correct parsing errors, and was automatically QA'ed to apply corrections following updates in the schema.", "# Acknowledgments\n\nWe would like to thank all the people who contributed to this corpus: Amir Zeldes, Hilla Merhav, Israel Landau, Netanel Dahan, Nick Howell, Noam Ordan, Omer Strass, Shira Wigderson, Yael Minerbi, Yifat Ben Moshe", "## Usage", "## References\n\nTo cite this dataset please refer to the following paper:\n\nZeldes, Amir, Nick Howell, Noam Ordan and Yifat Ben Moshe (2022) A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing. In: *Proceedings of EMNLP 2022*. Abu Dhabi, UAE." ]
[ 71, 24, 260, 73, 3, 75 ]
[ "passage: TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language-Hebrew #license-cc-by-sa-4.0 #constituency-parsing #dependency-parsing #arxiv-2210.07873 #region-us \n# Summary\n\nPublicly available subset of the IAHLT UD Hebrew Treebank's Wikipedia section (URL# Introduction\n\nThe UD Hebrew-IAHLTWiki treebank consists of 5,000 contemporary Hebrew sentences representing a variety of texts originating from Wikipedia entries, compiled by the Israeli Association of Human Language Technology. It includes various text domains, such as: biography, law, finance, health, places, events and miscellaneous. The schema for the UD Hebrew-IAHLT treebank, from which the publicly available UD Hebrew-IAHLTWiki subset is derived, is based on the conversion of the Hebrew Treebank (HTB) into the latest UD V2 and is checked against the Universal Dependencies validator as of UD release V2.10, in addition to a range of additional validations using the grewv tool.\n\nThe HTB version used in the project was initially converted automatically, then a subset of the converted data was manually validated and adopted as a gold standard for training the model for UD parsing used in Hebrew-IAHLT. The entire parsed data has been manually edited to correct parsing errors, and was automatically QA'ed to apply corrections following updates in the schema.# Acknowledgments\n\nWe would like to thank all the people who contributed to this corpus: Amir Zeldes, Hilla Merhav, Israel Landau, Netanel Dahan, Nick Howell, Noam Ordan, Omer Strass, Shira Wigderson, Yael Minerbi, Yifat Ben Moshe## Usage## References\n\nTo cite this dataset please refer to the following paper:\n\nZeldes, Amir, Nick Howell, Noam Ordan and Yifat Ben Moshe (2022) A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing. In: *Proceedings of EMNLP 2022*. Abu Dhabi, UAE." ]
3d80d576a2a8c93f29fc4d83a5884440225c5e3d
# Dataset of kokona (Blue Archive)

This is the dataset of kokona (Blue Archive), containing 150 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))

| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 150 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 416 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 505 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 150 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 150 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 150 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 416 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 416 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 390 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 505 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 505 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
AppleHarem/kokona_bluearchive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-11-23T20:02:51+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-11-23T20:03:06+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kokona (Blue Archive) ================================ This is the dataset of kokona (Blue Archive), containing 150 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization). (LittleAppleWebUI)
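The images are distributed as the zip packages listed in the full card's table; a minimal fetching sketch (the filename mirrors the card's relative download links, so treat it as an assumption and adjust as needed):

```python
from huggingface_hub import hf_hub_download

# Download one packaged archive from the dataset repo; the filename is
# taken from the card's table and may need adjusting.
path = hf_hub_download(
    repo_id="AppleHarem/kokona_bluearchive",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)
print(path)
```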
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
46c340b7537ff15de21bb871b97ce17798f8bceb
# Dataset Card for Evaluation run of jb723/cross_lingual_epoch2

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/jb723/cross_lingual_epoch2
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [jb723/cross_lingual_epoch2](https://huggingface.co/jb723/cross_lingual_epoch2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jb723__cross_lingual_epoch2_public", "harness_winogrande_5", split="train")
```

## Latest results

These are the [latest results from run 2023-11-23T20:42:48.019981](https://huggingface.co/datasets/open-llm-leaderboard/details_jb723__cross_lingual_epoch2_public/blob/main/results_2023-11-23T20-42-48.019981.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{ "all": { "acc": 0.36365814487208015, "acc_stderr": 0.033533768466394574, "acc_norm": 0.36889909845181407, "acc_norm_stderr": 0.03445291913417296, "mc1": 0.23745410036719705, "mc1_stderr": 0.01489627744104185, "mc2": 0.4789867119861502, "mc2_stderr": 0.016540775343672782, "em": 0.049601510067114093, "em_stderr": 0.0022235145171999363, "f1": 0.07294463087248305, "f1_stderr": 0.002421427712218101 }, "harness|arc:challenge|25": { "acc": 0.3293515358361775, "acc_stderr": 0.013734057652635474, "acc_norm": 0.3924914675767918, "acc_norm_stderr": 0.014269634635670714 }, "harness|hellaswag|10": { "acc": 0.3392750448117905, "acc_stderr": 0.004724956665879986, "acc_norm": 0.47918741286596295, "acc_norm_stderr": 0.004985456752161 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4148148148148148, "acc_stderr": 0.04256193767901407, "acc_norm": 0.4148148148148148, "acc_norm_stderr": 0.04256193767901407 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.3026315789473684, "acc_stderr": 0.037385206761196686, "acc_norm": 0.3026315789473684, "acc_norm_stderr": 0.037385206761196686 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.4, "acc_stderr": 0.04923659639173309, "acc_norm": 0.4, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.4339622641509434, "acc_stderr": 0.030503292013342596, "acc_norm": 0.4339622641509434, "acc_norm_stderr": 0.030503292013342596 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.3125, "acc_stderr": 0.038760854559127644, "acc_norm": 0.3125, "acc_norm_stderr": 0.038760854559127644 }, 
"harness|hendrycksTest-college_chemistry|5": { "acc": 0.18, "acc_stderr": 0.038612291966536955, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536955 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.3583815028901734, "acc_stderr": 0.03656343653353158, "acc_norm": 0.3583815028901734, "acc_norm_stderr": 0.03656343653353158 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.20588235294117646, "acc_stderr": 0.04023382273617747, "acc_norm": 0.20588235294117646, "acc_norm_stderr": 0.04023382273617747 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.48, "acc_stderr": 0.05021167315686781, "acc_norm": 0.48, "acc_norm_stderr": 0.05021167315686781 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.3148936170212766, "acc_stderr": 0.030363582197238167, "acc_norm": 0.3148936170212766, "acc_norm_stderr": 0.030363582197238167 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2543859649122807, "acc_stderr": 0.04096985139843671, "acc_norm": 0.2543859649122807, "acc_norm_stderr": 0.04096985139843671 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.38620689655172413, "acc_stderr": 0.04057324734419035, "acc_norm": 0.38620689655172413, "acc_norm_stderr": 0.04057324734419035 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.2566137566137566, "acc_stderr": 0.022494510767503154, "acc_norm": 0.2566137566137566, "acc_norm_stderr": 0.022494510767503154 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.30158730158730157, "acc_stderr": 0.04104947269903394, "acc_norm": 0.30158730158730157, "acc_norm_stderr": 0.04104947269903394 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.29, "acc_stderr": 0.045604802157206824, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206824 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.4032258064516129, "acc_stderr": 0.027906150826041143, "acc_norm": 0.4032258064516129, "acc_norm_stderr": 0.027906150826041143 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.24630541871921183, "acc_stderr": 0.03031509928561773, "acc_norm": 0.24630541871921183, "acc_norm_stderr": 0.03031509928561773 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.3696969696969697, "acc_stderr": 0.03769430314512568, "acc_norm": 0.3696969696969697, "acc_norm_stderr": 0.03769430314512568 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.3787878787878788, "acc_stderr": 0.03456088731993747, "acc_norm": 0.3787878787878788, "acc_norm_stderr": 0.03456088731993747 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.49222797927461137, "acc_stderr": 0.03608003225569653, "acc_norm": 0.49222797927461137, "acc_norm_stderr": 0.03608003225569653 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2923076923076923, "acc_stderr": 0.023060438380857744, "acc_norm": 0.2923076923076923, "acc_norm_stderr": 0.023060438380857744 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2074074074074074, "acc_stderr": 0.02472071319395216, "acc_norm": 0.2074074074074074, 
"acc_norm_stderr": 0.02472071319395216 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.36134453781512604, "acc_stderr": 0.031204691225150023, "acc_norm": 0.36134453781512604, "acc_norm_stderr": 0.031204691225150023 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.23841059602649006, "acc_stderr": 0.0347918557259966, "acc_norm": 0.23841059602649006, "acc_norm_stderr": 0.0347918557259966 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.43853211009174314, "acc_stderr": 0.021274713073954572, "acc_norm": 0.43853211009174314, "acc_norm_stderr": 0.021274713073954572 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.18981481481481483, "acc_stderr": 0.026744714834691926, "acc_norm": 0.18981481481481483, "acc_norm_stderr": 0.026744714834691926 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.37745098039215685, "acc_stderr": 0.03402272044340703, "acc_norm": 0.37745098039215685, "acc_norm_stderr": 0.03402272044340703 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.4388185654008439, "acc_stderr": 0.032302649315470375, "acc_norm": 0.4388185654008439, "acc_norm_stderr": 0.032302649315470375 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.49327354260089684, "acc_stderr": 0.03355476596234354, "acc_norm": 0.49327354260089684, "acc_norm_stderr": 0.03355476596234354 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.40458015267175573, "acc_stderr": 0.043046937953806645, "acc_norm": 0.40458015267175573, "acc_norm_stderr": 0.043046937953806645 }, "harness|hendrycksTest-international_law|5": { "acc": 0.5206611570247934, "acc_stderr": 0.04560456086387235, "acc_norm": 0.5206611570247934, "acc_norm_stderr": 0.04560456086387235 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.4444444444444444, "acc_stderr": 0.04803752235190193, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04803752235190193 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.3558282208588957, "acc_stderr": 0.03761521380046734, "acc_norm": 0.3558282208588957, "acc_norm_stderr": 0.03761521380046734 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3125, "acc_stderr": 0.043994650575715215, "acc_norm": 0.3125, "acc_norm_stderr": 0.043994650575715215 }, "harness|hendrycksTest-management|5": { "acc": 0.36893203883495146, "acc_stderr": 0.0477761518115674, "acc_norm": 0.36893203883495146, "acc_norm_stderr": 0.0477761518115674 }, "harness|hendrycksTest-marketing|5": { "acc": 0.6196581196581197, "acc_stderr": 0.031804252043840985, "acc_norm": 0.6196581196581197, "acc_norm_stderr": 0.031804252043840985 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.4725415070242657, "acc_stderr": 0.017852981266633955, "acc_norm": 0.4725415070242657, "acc_norm_stderr": 0.017852981266633955 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.4046242774566474, "acc_stderr": 0.02642481659400985, "acc_norm": 0.4046242774566474, "acc_norm_stderr": 0.02642481659400985 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23798882681564246, "acc_stderr": 0.014242630070574915, "acc_norm": 0.23798882681564246, "acc_norm_stderr": 0.014242630070574915 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.42810457516339867, "acc_stderr": 0.028332397483664274, "acc_norm": 0.42810457516339867, "acc_norm_stderr": 0.028332397483664274 }, "harness|hendrycksTest-philosophy|5": { "acc": 
0.4758842443729904, "acc_stderr": 0.028365041542564584, "acc_norm": 0.4758842443729904, "acc_norm_stderr": 0.028365041542564584 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.4537037037037037, "acc_stderr": 0.027701228468542602, "acc_norm": 0.4537037037037037, "acc_norm_stderr": 0.027701228468542602 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2765957446808511, "acc_stderr": 0.026684564340461, "acc_norm": 0.2765957446808511, "acc_norm_stderr": 0.026684564340461 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2985658409387223, "acc_stderr": 0.011688060141794228, "acc_norm": 0.2985658409387223, "acc_norm_stderr": 0.011688060141794228 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.25735294117647056, "acc_stderr": 0.026556519470041506, "acc_norm": 0.25735294117647056, "acc_norm_stderr": 0.026556519470041506 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.369281045751634, "acc_stderr": 0.019524316744866356, "acc_norm": 0.369281045751634, "acc_norm_stderr": 0.019524316744866356 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.4636363636363636, "acc_stderr": 0.04776449162396197, "acc_norm": 0.4636363636363636, "acc_norm_stderr": 0.04776449162396197 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.4122448979591837, "acc_stderr": 0.03151236044674281, "acc_norm": 0.4122448979591837, "acc_norm_stderr": 0.03151236044674281 }, "harness|hendrycksTest-sociology|5": { "acc": 0.472636815920398, "acc_stderr": 0.03530235517334682, "acc_norm": 0.472636815920398, "acc_norm_stderr": 0.03530235517334682 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-virology|5": { "acc": 0.3855421686746988, "acc_stderr": 0.03789134424611548, "acc_norm": 0.3855421686746988, "acc_norm_stderr": 0.03789134424611548 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.43859649122807015, "acc_stderr": 0.038057975055904594, "acc_norm": 0.43859649122807015, "acc_norm_stderr": 0.038057975055904594 }, "harness|truthfulqa:mc|0": { "mc1": 0.23745410036719705, "mc1_stderr": 0.01489627744104185, "mc2": 0.4789867119861502, "mc2_stderr": 0.016540775343672782 }, "harness|winogrande|5": { "acc": 0.6211523283346487, "acc_stderr": 0.013633724603180335 }, "harness|drop|3": { "em": 0.049601510067114093, "em_stderr": 0.0022235145171999363, "f1": 0.07294463087248305, "f1_stderr": 0.002421427712218101 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_jb723__cross_lingual_epoch2
[ "region:us" ]
2023-11-23T20:45:13+00:00
{"pretty_name": "Evaluation run of jb723/cross_lingual_epoch2", "dataset_summary": "Dataset automatically created during the evaluation run of model [jb723/cross_lingual_epoch2](https://huggingface.co/jb723/cross_lingual_epoch2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jb723__cross_lingual_epoch2_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T20:42:48.019981](https://huggingface.co/datasets/open-llm-leaderboard/details_jb723__cross_lingual_epoch2_public/blob/main/results_2023-11-23T20-42-48.019981.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.36365814487208015,\n \"acc_stderr\": 0.033533768466394574,\n \"acc_norm\": 0.36889909845181407,\n \"acc_norm_stderr\": 0.03445291913417296,\n \"mc1\": 0.23745410036719705,\n \"mc1_stderr\": 0.01489627744104185,\n \"mc2\": 0.4789867119861502,\n \"mc2_stderr\": 0.016540775343672782,\n \"em\": 0.049601510067114093,\n \"em_stderr\": 0.0022235145171999363,\n \"f1\": 0.07294463087248305,\n \"f1_stderr\": 0.002421427712218101\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3293515358361775,\n \"acc_stderr\": 0.013734057652635474,\n \"acc_norm\": 0.3924914675767918,\n \"acc_norm_stderr\": 0.014269634635670714\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.3392750448117905,\n \"acc_stderr\": 0.004724956665879986,\n \"acc_norm\": 0.47918741286596295,\n \"acc_norm_stderr\": 0.004985456752161\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4148148148148148,\n \"acc_stderr\": 0.04256193767901407,\n \"acc_norm\": 0.4148148148148148,\n \"acc_norm_stderr\": 0.04256193767901407\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.3026315789473684,\n \"acc_stderr\": 0.037385206761196686,\n \"acc_norm\": 0.3026315789473684,\n \"acc_norm_stderr\": 0.037385206761196686\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.4339622641509434,\n \"acc_stderr\": 0.030503292013342596,\n \"acc_norm\": 0.4339622641509434,\n \"acc_norm_stderr\": 0.030503292013342596\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.038760854559127644\n 
},\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3583815028901734,\n \"acc_stderr\": 0.03656343653353158,\n \"acc_norm\": 0.3583815028901734,\n \"acc_norm_stderr\": 0.03656343653353158\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617747,\n \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617747\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.05021167315686781,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.05021167315686781\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.3148936170212766,\n \"acc_stderr\": 0.030363582197238167,\n \"acc_norm\": 0.3148936170212766,\n \"acc_norm_stderr\": 0.030363582197238167\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n \"acc_stderr\": 0.04096985139843671,\n \"acc_norm\": 0.2543859649122807,\n \"acc_norm_stderr\": 0.04096985139843671\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.38620689655172413,\n \"acc_stderr\": 0.04057324734419035,\n \"acc_norm\": 0.38620689655172413,\n \"acc_norm_stderr\": 0.04057324734419035\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2566137566137566,\n \"acc_stderr\": 0.022494510767503154,\n \"acc_norm\": 0.2566137566137566,\n \"acc_norm_stderr\": 0.022494510767503154\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.30158730158730157,\n \"acc_stderr\": 0.04104947269903394,\n \"acc_norm\": 0.30158730158730157,\n \"acc_norm_stderr\": 0.04104947269903394\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206824,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206824\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4032258064516129,\n \"acc_stderr\": 0.027906150826041143,\n \"acc_norm\": 0.4032258064516129,\n \"acc_norm_stderr\": 0.027906150826041143\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.24630541871921183,\n \"acc_stderr\": 0.03031509928561773,\n \"acc_norm\": 0.24630541871921183,\n \"acc_norm_stderr\": 0.03031509928561773\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.3696969696969697,\n \"acc_stderr\": 0.03769430314512568,\n \"acc_norm\": 0.3696969696969697,\n \"acc_norm_stderr\": 0.03769430314512568\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.3787878787878788,\n \"acc_stderr\": 0.03456088731993747,\n \"acc_norm\": 0.3787878787878788,\n \"acc_norm_stderr\": 0.03456088731993747\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.49222797927461137,\n \"acc_stderr\": 0.03608003225569653,\n \"acc_norm\": 0.49222797927461137,\n 
\"acc_norm_stderr\": 0.03608003225569653\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2923076923076923,\n \"acc_stderr\": 0.023060438380857744,\n \"acc_norm\": 0.2923076923076923,\n \"acc_norm_stderr\": 0.023060438380857744\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2074074074074074,\n \"acc_stderr\": 0.02472071319395216,\n \"acc_norm\": 0.2074074074074074,\n \"acc_norm_stderr\": 0.02472071319395216\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.36134453781512604,\n \"acc_stderr\": 0.031204691225150023,\n \"acc_norm\": 0.36134453781512604,\n \"acc_norm_stderr\": 0.031204691225150023\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.23841059602649006,\n \"acc_stderr\": 0.0347918557259966,\n \"acc_norm\": 0.23841059602649006,\n \"acc_norm_stderr\": 0.0347918557259966\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.43853211009174314,\n \"acc_stderr\": 0.021274713073954572,\n \"acc_norm\": 0.43853211009174314,\n \"acc_norm_stderr\": 0.021274713073954572\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.18981481481481483,\n \"acc_stderr\": 0.026744714834691926,\n \"acc_norm\": 0.18981481481481483,\n \"acc_norm_stderr\": 0.026744714834691926\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.37745098039215685,\n \"acc_stderr\": 0.03402272044340703,\n \"acc_norm\": 0.37745098039215685,\n \"acc_norm_stderr\": 0.03402272044340703\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.4388185654008439,\n \"acc_stderr\": 0.032302649315470375,\n \"acc_norm\": 0.4388185654008439,\n \"acc_norm_stderr\": 0.032302649315470375\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.49327354260089684,\n \"acc_stderr\": 0.03355476596234354,\n \"acc_norm\": 0.49327354260089684,\n \"acc_norm_stderr\": 0.03355476596234354\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.40458015267175573,\n \"acc_stderr\": 0.043046937953806645,\n \"acc_norm\": 0.40458015267175573,\n \"acc_norm_stderr\": 0.043046937953806645\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.5206611570247934,\n \"acc_stderr\": 0.04560456086387235,\n \"acc_norm\": 0.5206611570247934,\n \"acc_norm_stderr\": 0.04560456086387235\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04803752235190193,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04803752235190193\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3558282208588957,\n \"acc_stderr\": 0.03761521380046734,\n \"acc_norm\": 0.3558282208588957,\n \"acc_norm_stderr\": 0.03761521380046734\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.36893203883495146,\n \"acc_stderr\": 0.0477761518115674,\n \"acc_norm\": 0.36893203883495146,\n \"acc_norm_stderr\": 0.0477761518115674\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6196581196581197,\n \"acc_stderr\": 0.031804252043840985,\n \"acc_norm\": 0.6196581196581197,\n \"acc_norm_stderr\": 0.031804252043840985\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.4725415070242657,\n \"acc_stderr\": 0.017852981266633955,\n \"acc_norm\": 0.4725415070242657,\n \"acc_norm_stderr\": 0.017852981266633955\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.4046242774566474,\n \"acc_stderr\": 0.02642481659400985,\n \"acc_norm\": 0.4046242774566474,\n \"acc_norm_stderr\": 0.02642481659400985\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.42810457516339867,\n \"acc_stderr\": 0.028332397483664274,\n \"acc_norm\": 0.42810457516339867,\n \"acc_norm_stderr\": 0.028332397483664274\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4758842443729904,\n \"acc_stderr\": 0.028365041542564584,\n \"acc_norm\": 0.4758842443729904,\n \"acc_norm_stderr\": 0.028365041542564584\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.4537037037037037,\n \"acc_stderr\": 0.027701228468542602,\n \"acc_norm\": 0.4537037037037037,\n \"acc_norm_stderr\": 0.027701228468542602\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2765957446808511,\n \"acc_stderr\": 0.026684564340461,\n \"acc_norm\": 0.2765957446808511,\n \"acc_norm_stderr\": 0.026684564340461\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2985658409387223,\n \"acc_stderr\": 0.011688060141794228,\n \"acc_norm\": 0.2985658409387223,\n \"acc_norm_stderr\": 0.011688060141794228\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.25735294117647056,\n \"acc_stderr\": 0.026556519470041506,\n \"acc_norm\": 0.25735294117647056,\n \"acc_norm_stderr\": 0.026556519470041506\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.369281045751634,\n \"acc_stderr\": 0.019524316744866356,\n \"acc_norm\": 0.369281045751634,\n \"acc_norm_stderr\": 0.019524316744866356\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4636363636363636,\n \"acc_stderr\": 0.04776449162396197,\n \"acc_norm\": 0.4636363636363636,\n \"acc_norm_stderr\": 0.04776449162396197\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.4122448979591837,\n \"acc_stderr\": 0.03151236044674281,\n \"acc_norm\": 0.4122448979591837,\n \"acc_norm_stderr\": 0.03151236044674281\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.472636815920398,\n \"acc_stderr\": 0.03530235517334682,\n \"acc_norm\": 0.472636815920398,\n \"acc_norm_stderr\": 0.03530235517334682\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3855421686746988,\n \"acc_stderr\": 0.03789134424611548,\n \"acc_norm\": 0.3855421686746988,\n \"acc_norm_stderr\": 0.03789134424611548\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.038057975055904594,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.038057975055904594\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23745410036719705,\n \"mc1_stderr\": 0.01489627744104185,\n \"mc2\": 0.4789867119861502,\n \"mc2_stderr\": 0.016540775343672782\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6211523283346487,\n \"acc_stderr\": 0.013633724603180335\n },\n \"harness|drop|3\": {\n \"em\": 
0.049601510067114093,\n \"em_stderr\": 0.0022235145171999363,\n \"f1\": 0.07294463087248305,\n \"f1_stderr\": 0.002421427712218101\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/jb723/cross_lingual_epoch2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|arc:challenge|25_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|drop|3_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|gsm8k|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hellaswag|10_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T20-42-48.019981.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T20-42-48.019981.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T20-42-48.019981.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T20-42-48.019981.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", 
"data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": 
["**/details_harness|truthfulqa:mc|0_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["**/details_harness|winogrande|5_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T20-42-48.019981.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T20_42_48.019981", "path": ["results_2023-11-23T20-42-48.019981.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T20-42-48.019981.parquet"]}]}]}
2023-11-23T20:45:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of jb723/cross_lingual_epoch2 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model jb723/cross_lingual_epoch2 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T20:42:48.019981 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
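A minimal example of the loading pattern described in the summary above, using the public repo and one of the config names listed in this card's own configuration metadata (here the five-shot WinoGrande details); the `latest` split always points at the most recent run, while the timestamped split pins a specific run:

```python
from datasets import load_dataset

# Per-sample details for one evaluated task; config names follow the
# "harness_<task>_<num_fewshot>" pattern used throughout this card.
data = load_dataset(
    "open-llm-leaderboard/details_jb723__cross_lingual_epoch2_public",
    "harness_winogrande_5",
    split="latest",
)
print(data)
```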
[ "# Dataset Card for Evaluation run of jb723/cross_lingual_epoch2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model jb723/cross_lingual_epoch2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T20:42:48.019981(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of jb723/cross_lingual_epoch2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model jb723/cross_lingual_epoch2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T20:42:48.019981(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 21, 31, 170, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jb723/cross_lingual_epoch2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model jb723/cross_lingual_epoch2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T20:42:48.019981(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
67fcb39b51ce2c7192b6a120b2c5ad88423f3c01
# DALL-E 3 Evaluation Samples This repository contains text-to-image samples collected for the evaluations of DALL-E 3 in the whitepaper. We provide samples not only from DALL-E 3, but from the competitors we compare against in the paper. The intent of this repository is to enable researchers in the text-to-image space to reproduce our results and foster forward progress of the text-to-image field as a whole. The samples from this repository are *not* meant to be demonstrations of the DALL-E 3 system. ## Structure There are six directories in this repository: ### coco Contains ~32,000 samples from each model derived from ~8,000 captions from the MSCOCO 2014 evaluation set. These samples are intended to be used for CLIP score calculation. ### drawbench Contains 4 samples for each prompt from the [drawbench dataset](https://imagen.research.google/) for each model. In the paper, we evaluate these samples using GPT-4 with Vision and using human raters. ### drawbench_upsampled Contains 4 samples for each prompt in our upsampled drawbench dataset, which was derived using the caption upsampling methodology described in the paper. We evaluate these samples using GPT-4 with Vision. ### prompts Contains the prompts used to generate all of the samples in the other directories. Prompt files are simple text files. The order of the prompts in these files corresponds with the order of the respective image samples. ### t2i_compbench Contains 4 samples for each prompt in the [T2I CompBench evaluation](https://github.com/Karine-Huang/T2I-CompBench). We use the scripts provided with that evaluation to measure the performance of the models in our comparison.
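Below is a minimal sketch of how the `coco` samples could be used for CLIP score calculation, which is the stated purpose of that directory. The directory layout, file names, prompt file name, and the CLIP backbone are all illustrative assumptions; only the prompt-order/image-order correspondence is documented above.

```python
import torch
import open_clip
from PIL import Image

# Load a CLIP backbone; the whitepaper does not pin the exact scoring model,
# so ViT-B-32 with OpenAI weights is an illustrative choice.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

@torch.no_grad()
def clip_score(image_path: str, caption: str) -> float:
    """Cosine similarity between the CLIP image and text embeddings."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    text = tokenizer([caption])
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return (img_feat @ txt_feat.T).item()

# Prompt files are plain text and their line order matches the sample order
# (see "prompts"). Both paths below are hypothetical placeholders.
with open("prompts/coco_captions.txt") as f:
    captions = [line.strip() for line in f]
scores = [clip_score(f"coco/dalle3/{i:05d}.png", c) for i, c in enumerate(captions)]
print(f"mean CLIP score: {sum(scores) / len(scores):.4f}")
```

Note that reporting conventions for CLIP score vary (some papers rescale the cosine similarity); the raw mean cosine similarity is shown here.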
johko/dalle3-eval-samples
[ "region:us" ]
2023-11-23T20:58:21+00:00
{}
2023-11-23T21:11:27+00:00
[]
[]
TAGS #region-us
# DALL-E 3 Evaluation Samples This repository contains text-to-image samples collected for the evaluations of DALL-E 3 in the whitepaper. We provide samples not only from DALL-E 3, but from the competitors we compare against in the paper. The intent of this repository is to enable researchers in the text-to-image space to reproduce our results and foster forward progress of the text-to-image field as a whole. The samples from this repository are *not* meant to be demonstrations of the DALL-E 3 system. ## Structure There are six directories in this repository: ### coco Contains ~32,000 samples from each model derived from ~8,000 captions from the MSCOCO 2014 evaluation set. These samples are intended to be used for CLIP score calculation. ### drawbench Contains 4 samples for each prompt from the drawbench dataset for each model. In the paper, we evaluate these samples using GPT-4 with Vision and using human raters. ### drawbench_upsampled Contains 4 samples for each prompt in our upsampled drawbench dataset, which was derived using the caption upsampling methodology described in the paper. We evaluate these samples using GPT-4 with Vision. ### prompts Contains the prompts used to generate all of the samples in the other directories. Prompt files are simple text files. The order of the prompts in these files corresponds with the order of the respective image samples. ### t2i_compbench Contains 4 samples for each prompt in the T2I CompBench evaluation. We use the scripts provided with that evaluation to measure the performance of the models in our comparison.
[ "# DALL-E 3 Evaluation Samples\n\nThis repository contains text-to-image samples collected for the evaluations of DALL-E 3 in the whitepaper. We provide samples not only from DALL-E 3, but from the competitors we compare against in the paper. \n\nThe intent of this repository is to enable researchers in the text-to-image space to reproduce our results and foster forward progress of the text-to-image field as a whole. The samples from this repository are *not* meant to be demonstrations of the DALL-E 3 system.", "## Structure\n\nThere are six directories in this repository:", "### coco\n\nContains ~32,000 samples from each model derived from ~8,000 captions from the MSCOCO 2014 evaluation set. These samples are intended to be used for CLIP score calculation.", "### drawbench\n\nContains 4 samples for each prompt from the drawbench dataset for each model. In the paper, we evaluate these samples using GPT-4 with Vision and using human raters.", "### drawbench_upsampled\n\nContains 4 samples for each prompt in our upsampled drawbench dataset, which was derived using the caption upsampling methodology described in the paper. We evaluate these samples using GPT-4 with Vision.", "### prompts\n\nContains the prompts used to generate all of the samples in the other directories. Prompt files are simple text files. The order of the prompts in these files corresponds with the order of the respective image samples.", "### t2i_compbench\n\nContains 4 samples for each prompt in the T2I CompBench evaluation. We use the scripts provided with that evaluation to measure the performance of the models in our comparison." ]
[ "TAGS\n#region-us \n", "# DALL-E 3 Evaluation Samples\n\nThis repository contains text-to-image samples collected for the evaluations of DALL-E 3 in the whitepaper. We provide samples not only from DALL-E 3, but from the competitors we compare against in the paper. \n\nThe intent of this repository is to enable researchers in the text-to-image space to reproduce our results and foster forward progress of the text-to-image field as a whole. The samples from this repository are *not* meant to be demonstrations of the DALL-E 3 system.", "## Structure\n\nThere are six directories in this repository:", "### coco\n\nContains ~32,000 samples from each model derived from ~8,000 captions from the MSCOCO 2014 evaluation set. These samples are intended to be used for CLIP score calculation.", "### drawbench\n\nContains 4 samples for each prompt from the drawbench dataset for each model. In the paper, we evaluate these samples using GPT-4 with Vision and using human raters.", "### drawbench_upsampled\n\nContains 4 samples for each prompt in our upsampled drawbench dataset, which was derived using the caption upsampling methodology described in the paper. We evaluate these samples using GPT-4 with Vision.", "### prompts\n\nContains the prompts used to generate all of the samples in the other directories. Prompt files are simple text files. The order of the prompts in these files corresponds with the order of the respective image samples.", "### t2i_compbench\n\nContains 4 samples for each prompt in the T2I CompBench evaluation. We use the scripts provided with that evaluation to measure the performance of the models in our comparison." ]
[ 6, 134, 15, 44, 46, 60, 53, 47 ]
[ "passage: TAGS\n#region-us \n# DALL-E 3 Evaluation Samples\n\nThis repository contains text-to-image samples collected for the evaluations of DALL-E 3 in the whitepaper. We provide samples not only from DALL-E 3, but from the competitors we compare against in the paper. \n\nThe intent of this repository is to enable researchers in the text-to-image space to reproduce our results and foster forward progress of the text-to-image field as a whole. The samples from this repository are *not* meant to be demonstrations of the DALL-E 3 system.## Structure\n\nThere are six directories in this repository:### coco\n\nContains ~32,000 samples from each model derived from ~8,000 captions from the MSCOCO 2014 evaluation set. These samples are intended to be used for CLIP score calculation.### drawbench\n\nContains 4 samples for each prompt from the drawbench dataset for each model. In the paper, we evaluate these samples using GPT-4 with Vision and using human raters.### drawbench_upsampled\n\nContains 4 samples for each prompt in our upsampled drawbench dataset, which was derived using the caption upsampling methodology described in the paper. We evaluate these samples using GPT-4 with Vision.### prompts\n\nContains the prompts used to generate all of the samples in the other directories. Prompt files are simple text files. The order of the prompts in these files corresponds with the order of the respective image samples.### t2i_compbench\n\nContains 4 samples for each prompt in the T2I CompBench evaluation. We use the scripts provided with that evaluation to measure the performance of the models in our comparison." ]
9c0708bca7c01bb81e4ea2098e38b4586a9ed9a9
### Dataset This is an edited and tokenized version of the MedQuad-MedicalQnADataset dataset by keivalya. The original dataset contains 16K+ questions and answers between patient and doctor, which have been converted into full prompts to train BioGPT by Microsoft. ##### Tokenizer used microsoft/BioGPT-Large (BPE tokenizer) ### Full prompt ```py prompt = f"""You are a helpful AI Doctor who answers medical questions. Below is a question from a patient. Your task is to answer the questions as truthfully as you can. ### Patient: {sample['Question']} ### Doctor: {sample['Answer']}""" ``` ### Notes Since BioGPT has a maximum input length of 1,024 tokens, the full prompt was truncated to stay below this limit. The truncation strategy I used made sure that only full sentences were produced. Please note that this dataset is for research/testing only; it should not be used in a real setting or used to give medical advice to people.
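As an illustration of the truncation described in the notes, here is a sketch of sentence-level truncation under the 1,024-token limit. The exact strategy used to build this dataset is not published, so treat this as one plausible reconstruction; it assumes the `microsoft/BioGPT-Large` tokenizer named above and a naive sentence split.

```py
from transformers import AutoTokenizer

# Tokenizer named above; 1,024 is BioGPT's maximum input length.
tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")
MAX_TOKENS = 1024

def truncate_to_full_sentences(prompt: str, max_tokens: int = MAX_TOKENS) -> str:
    # Naive split on '. '; the dataset's actual sentence splitter is unknown.
    sentences = prompt.split(". ")
    while sentences:
        candidate = ". ".join(sentences)
        if len(tokenizer(candidate)["input_ids"]) <= max_tokens:
            return candidate
        sentences.pop()  # drop the last sentence and try again
    return ""

# Toy over-long input just to exercise the loop.
long_answer = "Glaucoma damages the optic nerve. " * 400
truncated = truncate_to_full_sentences(long_answer)
print(len(tokenizer(truncated)["input_ids"]))  # <= 1024
```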
RobCzikkel/DoctorGPT
[ "task_categories:conversational", "size_categories:10K<n<100K", "language:en", "biology", "medical", "region:us" ]
2023-11-23T21:04:25+00:00
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "Doctor & Patient", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "length", "dtype": "int64"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 42127351.778204426, "num_examples": 13125}, {"name": "test", "num_bytes": 10534245.221795576, "num_examples": 3282}], "download_size": 10917910, "dataset_size": 52661597.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["biology", "medical"]}
2023-12-05T23:05:53+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #size_categories-10K<n<100K #language-English #biology #medical #region-us
### Dataset This is an edited and tokenized version of the MedQuad-MedicalQnADataset dataset by keivalya. The original dataset contains 16K+ questions and answers between patient and doctor, which have been converted into a full prompt to train BioGPT by Microsoft. ##### Tokenizer used microsoft/BioGPT-Large (BPE tokenizer) ### Full prompt ### Notes Since bioGPT has a max input of 1024, the full prompt was truncated to stay below this limit. The truncation strategy I used made sure that only full sentences were produced. Please note that this dataset is for research/testing only, it should not be used in a real setting or used to give medical advice to people.
[ "### Dataset\nThis is an edited and tokenized version of the MedQuad-MedicalQnADataset dataset by keivalya.\nThe original dataset contains 16K+ questions and answers between patient and doctor, which have been converted into a full prompt to train BioGPT by Microsoft.", "##### Tokenizer used\nmicrosoft/BioGPT-Large (BPE tokenizer)", "### Full prompt", "### Notes\nSince bioGPT has a max input of 1024, the full prompt was truncated to stay below this limit.\nThe truncation strategy I used made sure that only full sentences were produced.\n\nPlease note that this dataset is for research/testing only, it should not be used in a real setting or used to give medical advice to people." ]
[ "TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #biology #medical #region-us \n", "### Dataset\nThis is an edited and tokenized version of the MedQuad-MedicalQnADataset dataset by keivalya.\nThe original dataset contains 16K+ questions and answers between patient and doctor, which have been converted into a full prompt to train BioGPT by Microsoft.", "##### Tokenizer used\nmicrosoft/BioGPT-Large (BPE tokenizer)", "### Full prompt", "### Notes\nSince bioGPT has a max input of 1024, the full prompt was truncated to stay below this limit.\nThe truncation strategy I used made sure that only full sentences were produced.\n\nPlease note that this dataset is for research/testing only, it should not be used in a real setting or used to give medical advice to people." ]
[ 38, 69, 22, 4, 77 ]
[ "passage: TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #biology #medical #region-us \n### Dataset\nThis is an edited and tokenized version of the MedQuad-MedicalQnADataset dataset by keivalya.\nThe original dataset contains 16K+ questions and answers between patient and doctor, which have been converted into a full prompt to train BioGPT by Microsoft.##### Tokenizer used\nmicrosoft/BioGPT-Large (BPE tokenizer)### Full prompt### Notes\nSince bioGPT has a max input of 1024, the full prompt was truncated to stay below this limit.\nThe truncation strategy I used made sure that only full sentences were produced.\n\nPlease note that this dataset is for research/testing only, it should not be used in a real setting or used to give medical advice to people." ]
fd3d8a5a2853162b25ed37201a5a18ac705711a8
## Shadow-Alignment-Dataset
Jinawei/shadow-alignment-data
[ "license:apache-2.0", "region:us" ]
2023-11-23T21:44:10+00:00
{"license": "apache-2.0"}
2023-11-23T22:14:02+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
## Shadow-Alignment-Dataset
[ "## Shadow-Alignment-Dataset" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "## Shadow-Alignment-Dataset" ]
[ 14, 10 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n## Shadow-Alignment-Dataset" ]
ed19d58c8ae12985bd55625ce2ca82fa71c92113
# [WIP] Dataset Card for "da-cloze-self-test" *Please note that this dataset and dataset card both are works in progress. For now refer to the related [thesis](https://sorenmulli.github.io/thesis/thesis.pdf) for all details*
sorenmulli/da-cloze-self-test
[ "region:us" ]
2023-11-23T22:51:17+00:00
{"dataset_info": {"features": [{"name": "text-idx", "dtype": "int64"}, {"name": "cloze-idx", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "correct", "dtype": "int64"}, {"name": "option-0", "dtype": "string"}, {"name": "option-1", "dtype": "string"}, {"name": "option-2", "dtype": "string"}, {"name": "option-3", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17272, "num_examples": 50}], "download_size": 11911, "dataset_size": 17272}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-15T19:37:45+00:00
[]
[]
TAGS #region-us
# [WIP] Dataset Card for "da-cloze-self-test" *Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*
[ "# [WIP] Dataset Card for \"da-cloze-self-test\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*" ]
[ "TAGS\n#region-us \n", "# [WIP] Dataset Card for \"da-cloze-self-test\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*" ]
[ 6, 47 ]
[ "passage: TAGS\n#region-us \n# [WIP] Dataset Card for \"da-cloze-self-test\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*" ]
60e4f15b379f9897c61ac17979905857a5758dfe
# Dataset Card for "wikipedia-simple" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davidfant/wikipedia-simple
[ "region:us" ]
2023-11-23T22:54:51+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "slug", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "revision_id", "dtype": "int64"}, {"name": "markdown", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2150458756, "num_examples": 338228}], "download_size": 1045572646, "dataset_size": 2150458756}}
2023-11-23T22:57:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikipedia-simple" More Information needed
[ "# Dataset Card for \"wikipedia-simple\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikipedia-simple\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikipedia-simple\"\n\nMore Information needed" ]
099ee143c7fdfa6bd7965be8c801cb161c313b29
# [WIP] Dataset Card for "da-hashtag-twitterhjerne" *Please note that this dataset and dataset card both are works in progress. For now refer to the related [thesis](https://sorenmulli.github.io/thesis/thesis.pdf) for all details*
sorenmulli/da-hashtag-twitterhjerne
[ "region:us" ]
2023-11-23T23:00:27+00:00
{"dataset_info": {"features": [{"name": "Question", "dtype": "string"}, {"name": "Answer 1", "dtype": "string"}, {"name": "Answer 2", "dtype": "string"}, {"name": "Answer 3", "dtype": "string"}, {"name": "Answer 4", "dtype": "string"}, {"name": "Answer 5", "dtype": "string"}, {"name": "Answer 6", "dtype": "string"}, {"name": "Unnamed: 8", "dtype": "string"}, {"name": "Unnamed: 9", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51635, "num_examples": 78}], "download_size": 50291, "dataset_size": 51635}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-15T19:36:27+00:00
[]
[]
TAGS #region-us
# [WIP] Dataset Card for "da-hashtag-twitterhjerne" *Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*
[ "# [WIP] Dataset Card for \"da-hashtag-twitterhjerne\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*" ]
[ "TAGS\n#region-us \n", "# [WIP] Dataset Card for \"da-hashtag-twitterhjerne\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*" ]
[ 6, 48 ]
[ "passage: TAGS\n#region-us \n# [WIP] Dataset Card for \"da-hashtag-twitterhjerne\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*" ]
ef5a85648646dac0450fc1ff1bf4ed16bbd795d8
# Dataset Card for Evaluation run of uukuguy/SynthIA-7B-v1.3-dare-0.85 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/uukuguy/SynthIA-7B-v1.3-dare-0.85 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [uukuguy/SynthIA-7B-v1.3-dare-0.85](https://huggingface.co/uukuguy/SynthIA-7B-v1.3-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_uukuguy__SynthIA-7B-v1.3-dare-0.85_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-23T22:59:57.395887](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__SynthIA-7B-v1.3-dare-0.85_public/blob/main/results_2023-11-23T22-59-57.395887.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6384101997004026, "acc_stderr": 0.0320658451939497, "acc_norm": 0.6475312994622042, "acc_norm_stderr": 0.032755008534067175, "mc1": 0.2974296205630355, "mc1_stderr": 0.016002651487361, "mc2": 0.4377418572010016, "mc2_stderr": 0.014257418960086683, "em": 0.0018875838926174498, "em_stderr": 0.0004445109990558977, "f1": 0.06350356543624144, "f1_stderr": 0.0013999691906909637 }, "harness|arc:challenge|25": { "acc": 0.5750853242320819, "acc_stderr": 0.014445698968520769, "acc_norm": 0.6100682593856656, "acc_norm_stderr": 0.014252959848892893 }, "harness|hellaswag|10": { "acc": 0.6336387173869747, "acc_stderr": 0.004808251269682433, "acc_norm": 0.8349930292770364, "acc_norm_stderr": 0.00370428239078172 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6370370370370371, "acc_stderr": 0.04153948404742398, "acc_norm": 0.6370370370370371, "acc_norm_stderr": 0.04153948404742398 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6578947368421053, "acc_stderr": 0.03860731599316091, "acc_norm": 0.6578947368421053, "acc_norm_stderr": 0.03860731599316091 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7094339622641509, "acc_stderr": 0.027943219989337135, "acc_norm": 0.7094339622641509, "acc_norm_stderr": 0.027943219989337135 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7361111111111112, "acc_stderr": 0.03685651095897532, "acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416907, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416907 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3431372549019608, "acc_stderr": 0.04724007352383887, "acc_norm": 0.3431372549019608, "acc_norm_stderr": 0.04724007352383887 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.04093601807403326, "acc_norm": 0.79, "acc_norm_stderr": 0.04093601807403326 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5787234042553191, "acc_stderr": 0.03227834510146268, "acc_norm": 0.5787234042553191, "acc_norm_stderr": 0.03227834510146268 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.49122807017543857, "acc_stderr": 0.04702880432049615, "acc_norm": 0.49122807017543857, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.041227371113703316, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.041227371113703316 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.40476190476190477, "acc_stderr": 0.0252798503974049, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.0252798503974049 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4126984126984127, "acc_stderr": 0.04403438954768177, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.04403438954768177 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7709677419354839, "acc_stderr": 0.02390491431178265, "acc_norm": 0.7709677419354839, "acc_norm_stderr": 0.02390491431178265 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5221674876847291, "acc_stderr": 0.03514528562175008, "acc_norm": 0.5221674876847291, "acc_norm_stderr": 0.03514528562175008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.032876667586034906, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.032876667586034906 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.028869778460267042, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.028869778460267042 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8756476683937824, "acc_stderr": 0.02381447708659355, "acc_norm": 0.8756476683937824, "acc_norm_stderr": 0.02381447708659355 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.028897748741131147, "acc_norm": 
0.34074074074074073, "acc_norm_stderr": 0.028897748741131147 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6512605042016807, "acc_stderr": 0.030956636328566548, "acc_norm": 0.6512605042016807, "acc_norm_stderr": 0.030956636328566548 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3443708609271523, "acc_stderr": 0.038796870240733264, "acc_norm": 0.3443708609271523, "acc_norm_stderr": 0.038796870240733264 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8146788990825689, "acc_stderr": 0.01665927970029584, "acc_norm": 0.8146788990825689, "acc_norm_stderr": 0.01665927970029584 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5509259259259259, "acc_stderr": 0.033922384053216174, "acc_norm": 0.5509259259259259, "acc_norm_stderr": 0.033922384053216174 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7990196078431373, "acc_stderr": 0.028125972265654373, "acc_norm": 0.7990196078431373, "acc_norm_stderr": 0.028125972265654373 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7890295358649789, "acc_stderr": 0.02655837250266192, "acc_norm": 0.7890295358649789, "acc_norm_stderr": 0.02655837250266192 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7040358744394619, "acc_stderr": 0.0306365913486998, "acc_norm": 0.7040358744394619, "acc_norm_stderr": 0.0306365913486998 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7938931297709924, "acc_stderr": 0.03547771004159463, "acc_norm": 0.7938931297709924, "acc_norm_stderr": 0.03547771004159463 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990947, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990947 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7870370370370371, "acc_stderr": 0.0395783547198098, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.0395783547198098 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.803680981595092, "acc_stderr": 0.031207970394709218, "acc_norm": 0.803680981595092, "acc_norm_stderr": 0.031207970394709218 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5089285714285714, "acc_stderr": 0.04745033255489123, "acc_norm": 0.5089285714285714, "acc_norm_stderr": 0.04745033255489123 }, "harness|hendrycksTest-management|5": { "acc": 0.8155339805825242, "acc_stderr": 0.03840423627288276, "acc_norm": 0.8155339805825242, "acc_norm_stderr": 0.03840423627288276 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8803418803418803, "acc_stderr": 0.021262719400406953, "acc_norm": 0.8803418803418803, "acc_norm_stderr": 0.021262719400406953 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8109833971902938, "acc_stderr": 0.014000791294407006, "acc_norm": 0.8109833971902938, "acc_norm_stderr": 0.014000791294407006 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7225433526011561, "acc_stderr": 0.02410571260775431, "acc_norm": 0.7225433526011561, "acc_norm_stderr": 0.02410571260775431 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.34972067039106147, "acc_stderr": 0.015949308790233645, "acc_norm": 0.34972067039106147, "acc_norm_stderr": 0.015949308790233645 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { 
"acc": 0.7009646302250804, "acc_stderr": 0.02600330111788514, "acc_norm": 0.7009646302250804, "acc_norm_stderr": 0.02600330111788514 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7469135802469136, "acc_stderr": 0.024191808600713, "acc_norm": 0.7469135802469136, "acc_norm_stderr": 0.024191808600713 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5141843971631206, "acc_stderr": 0.02981549448368206, "acc_norm": 0.5141843971631206, "acc_norm_stderr": 0.02981549448368206 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4485006518904824, "acc_stderr": 0.012702317490559802, "acc_norm": 0.4485006518904824, "acc_norm_stderr": 0.012702317490559802 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6727941176470589, "acc_stderr": 0.028501452860396556, "acc_norm": 0.6727941176470589, "acc_norm_stderr": 0.028501452860396556 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6830065359477124, "acc_stderr": 0.018824219512706207, "acc_norm": 0.6830065359477124, "acc_norm_stderr": 0.018824219512706207 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.028123429335142783, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.028123429335142783 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8507462686567164, "acc_stderr": 0.025196929874827072, "acc_norm": 0.8507462686567164, "acc_norm_stderr": 0.025196929874827072 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.033799766898963086, "acc_norm": 0.87, "acc_norm_stderr": 0.033799766898963086 }, "harness|hendrycksTest-virology|5": { "acc": 0.5421686746987951, "acc_stderr": 0.03878626771002361, "acc_norm": 0.5421686746987951, "acc_norm_stderr": 0.03878626771002361 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.2974296205630355, "mc1_stderr": 0.016002651487361, "mc2": 0.4377418572010016, "mc2_stderr": 0.014257418960086683 }, "harness|winogrande|5": { "acc": 0.7892659826361483, "acc_stderr": 0.011462046419710686 }, "harness|drop|3": { "em": 0.0018875838926174498, "em_stderr": 0.0004445109990558977, "f1": 0.06350356543624144, "f1_stderr": 0.0013999691906909637 }, "harness|gsm8k|5": { "acc": 0.18574677786201668, "acc_stderr": 0.010712298902729095 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
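As a complement to the `load_dataset` snippet above, here is a hedged sketch for fetching this run's aggregated results file directly; the filename comes from the "Latest results" link above, and the JSON layout is assumed to mirror the snippet printed there.

```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_uukuguy__SynthIA-7B-v1.3-dare-0.85_public",
    filename="results_2023-11-23T22-59-57.395887.json",
    repo_type="dataset",
)
with open(path) as f:
    raw = json.load(f)

# Assumed to mirror the snippet above; fall back gracefully in case the
# metrics are nested under a top-level "results" key instead.
metrics = raw.get("results", raw)
print(metrics["all"]["acc"], metrics["all"]["acc_norm"])
```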
open-llm-leaderboard/details_uukuguy__SynthIA-7B-v1.3-dare-0.85
[ "region:us" ]
2023-11-23T23:02:57+00:00
{"pretty_name": "Evaluation run of uukuguy/SynthIA-7B-v1.3-dare-0.85", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/SynthIA-7B-v1.3-dare-0.85](https://huggingface.co/uukuguy/SynthIA-7B-v1.3-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__SynthIA-7B-v1.3-dare-0.85_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T22:59:57.395887](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__SynthIA-7B-v1.3-dare-0.85_public/blob/main/results_2023-11-23T22-59-57.395887.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6384101997004026,\n \"acc_stderr\": 0.0320658451939497,\n \"acc_norm\": 0.6475312994622042,\n \"acc_norm_stderr\": 0.032755008534067175,\n \"mc1\": 0.2974296205630355,\n \"mc1_stderr\": 0.016002651487361,\n \"mc2\": 0.4377418572010016,\n \"mc2_stderr\": 0.014257418960086683,\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558977,\n \"f1\": 0.06350356543624144,\n \"f1_stderr\": 0.0013999691906909637\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5750853242320819,\n \"acc_stderr\": 0.014445698968520769,\n \"acc_norm\": 0.6100682593856656,\n \"acc_norm_stderr\": 0.014252959848892893\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6336387173869747,\n \"acc_stderr\": 0.004808251269682433,\n \"acc_norm\": 0.8349930292770364,\n \"acc_norm_stderr\": 0.00370428239078172\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316091,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316091\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337135,\n \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337135\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 
0.7361111111111112,\n \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416907,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416907\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.04724007352383887,\n \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.04724007352383887\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.041227371113703316,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.041227371113703316\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.0252798503974049,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.0252798503974049\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.04403438954768177,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.04403438954768177\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.02390491431178265,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.02390491431178265\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.032876667586034906,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.032876667586034906\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 
0.02381447708659355,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.02381447708659355\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131147,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131147\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566548,\n \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566548\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8146788990825689,\n \"acc_stderr\": 0.01665927970029584,\n \"acc_norm\": 0.8146788990825689,\n \"acc_norm_stderr\": 0.01665927970029584\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.033922384053216174,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.033922384053216174\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7990196078431373,\n \"acc_stderr\": 0.028125972265654373,\n \"acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.028125972265654373\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7040358744394619,\n \"acc_stderr\": 0.0306365913486998,\n \"acc_norm\": 0.7040358744394619,\n \"acc_norm_stderr\": 0.0306365913486998\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709218,\n \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709218\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406953,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406953\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n 
\"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8109833971902938,\n \"acc_stderr\": 0.014000791294407006,\n \"acc_norm\": 0.8109833971902938,\n \"acc_norm_stderr\": 0.014000791294407006\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.02410571260775431,\n \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.02410571260775431\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.34972067039106147,\n \"acc_stderr\": 0.015949308790233645,\n \"acc_norm\": 0.34972067039106147,\n \"acc_norm_stderr\": 0.015949308790233645\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7469135802469136,\n \"acc_stderr\": 0.024191808600713,\n \"acc_norm\": 0.7469135802469136,\n \"acc_norm_stderr\": 0.024191808600713\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5141843971631206,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.5141843971631206,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4485006518904824,\n \"acc_stderr\": 0.012702317490559802,\n \"acc_norm\": 0.4485006518904824,\n \"acc_norm_stderr\": 0.012702317490559802\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396556,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396556\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706207,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706207\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.025196929874827072,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.025196929874827072\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.03878626771002361,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.03878626771002361\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2974296205630355,\n \"mc1_stderr\": 0.016002651487361,\n \"mc2\": 0.4377418572010016,\n \"mc2_stderr\": 0.014257418960086683\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7892659826361483,\n \"acc_stderr\": 
0.011462046419710686\n },\n \"harness|drop|3\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558977,\n \"f1\": 0.06350356543624144,\n \"f1_stderr\": 0.0013999691906909637\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.18574677786201668,\n \"acc_stderr\": 0.010712298902729095\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/SynthIA-7B-v1.3-dare-0.85", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|arc:challenge|25_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|drop|3_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|gsm8k|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hellaswag|10_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T22-59-57.395887.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T22-59-57.395887.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T22-59-57.395887.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T22-59-57.395887.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T22-59-57.395887.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["**/details_harness|winogrande|5_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T22-59-57.395887.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T22_59_57.395887", "path": ["results_2023-11-23T22-59-57.395887.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T22-59-57.395887.parquet"]}]}]}
2023-11-23T23:03:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of uukuguy/SynthIA-7B-v1.3-dare-0.85 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model uukuguy/SynthIA-7B-v1.3-dare-0.85 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T22:59:57.395887 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
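(The loading snippet itself was stripped from this processed copy of the card.) A minimal sketch of what it would look like, assuming the repo and config naming used by the sibling Open LLM Leaderboard detail cards in this dump:

```python
from datasets import load_dataset

# Hypothetical reconstruction: repo and config names follow the pattern of
# the sibling cards; per the card text, the "train" split always points to
# the latest results (the configs above also expose a "latest" split).
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__SynthIA-7B-v1.3-dare-0.85_public",
    "harness_winogrande_5",
    split="train",
)
```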
[ "# Dataset Card for Evaluation run of uukuguy/SynthIA-7B-v1.3-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/SynthIA-7B-v1.3-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T22:59:57.395887(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of uukuguy/SynthIA-7B-v1.3-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/SynthIA-7B-v1.3-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T22:59:57.395887(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 28, 31, 177, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/SynthIA-7B-v1.3-dare-0.85## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/SynthIA-7B-v1.3-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T22:59:57.395887(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
67ce63f742b76776ac3c3031492bd8fe4c90090d
# Dataset Card for Evaluation run of uukuguy/airoboros-m-7b-3.1.2-dare-0.85

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/uukuguy/airoboros-m-7b-3.1.2-dare-0.85
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [uukuguy/airoboros-m-7b-3.1.2-dare-0.85](https://huggingface.co/uukuguy/airoboros-m-7b-3.1.2-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Each "harness_*" configuration holds the per-sample details for one task;
# the "train" split points to the latest evaluation run.
data = load_dataset("open-llm-leaderboard/details_uukuguy__airoboros-m-7b-3.1.2-dare-0.85_public",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-11-23T23:04:08.316762](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__airoboros-m-7b-3.1.2-dare-0.85_public/blob/main/results_2023-11-23T23-04-08.316762.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.634042190068382, "acc_stderr": 0.032283010718078695, "acc_norm": 0.6433235552057146, "acc_norm_stderr": 0.03298195134123534, "mc1": 0.2937576499388005, "mc1_stderr": 0.015945068581236614, "mc2": 0.43638896018594414, "mc2_stderr": 0.01419131146424957, "em": 0.0016778523489932886, "em_stderr": 0.00041913301788268467, "f1": 0.06136954697986581, "f1_stderr": 0.0013699074965009578 }, "harness|arc:challenge|25": { "acc": 0.575938566552901, "acc_stderr": 0.0144418896274644, "acc_norm": 0.6109215017064846, "acc_norm_stderr": 0.014247309976045607 }, "harness|hellaswag|10": { "acc": 0.6330412268472416, "acc_stderr": 0.00480990115123484, "acc_norm": 0.8356901015733917, "acc_norm_stderr": 0.003697992356124477 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.04725815626252606, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252606 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6222222222222222, "acc_stderr": 0.04188307537595852, "acc_norm": 0.6222222222222222, "acc_norm_stderr": 0.04188307537595852 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6644736842105263, "acc_stderr": 0.03842498559395268, "acc_norm": 0.6644736842105263, "acc_norm_stderr": 0.03842498559395268 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7291666666666666, "acc_stderr": 0.03716177437566017, "acc_norm": 0.7291666666666666, "acc_norm_stderr": 0.03716177437566017 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.04897104952726366, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.04897104952726366 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5702127659574469, "acc_stderr": 0.03236214467715564, "acc_norm": 0.5702127659574469, "acc_norm_stderr": 0.03236214467715564 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.49122807017543857, "acc_stderr": 0.04702880432049615, "acc_norm": 0.49122807017543857, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3862433862433862, "acc_stderr": 0.02507598176760168, "acc_norm": 0.3862433862433862, "acc_norm_stderr": 0.02507598176760168 
}, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3888888888888889, "acc_stderr": 0.0436031486007746, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.0436031486007746 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.39, "acc_stderr": 0.04902071300001974, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7677419354838709, "acc_stderr": 0.024022256130308235, "acc_norm": 0.7677419354838709, "acc_norm_stderr": 0.024022256130308235 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5221674876847291, "acc_stderr": 0.03514528562175008, "acc_norm": 0.5221674876847291, "acc_norm_stderr": 0.03514528562175008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621504, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7636363636363637, "acc_stderr": 0.03317505930009182, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.03317505930009182 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7626262626262627, "acc_stderr": 0.0303137105381989, "acc_norm": 0.7626262626262627, "acc_norm_stderr": 0.0303137105381989 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8756476683937824, "acc_stderr": 0.02381447708659355, "acc_norm": 0.8756476683937824, "acc_norm_stderr": 0.02381447708659355 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.658974358974359, "acc_stderr": 0.02403548967633508, "acc_norm": 0.658974358974359, "acc_norm_stderr": 0.02403548967633508 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.028897748741131143, "acc_norm": 0.34074074074074073, "acc_norm_stderr": 0.028897748741131143 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6680672268907563, "acc_stderr": 0.03058869701378364, "acc_norm": 0.6680672268907563, "acc_norm_stderr": 0.03058869701378364 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3443708609271523, "acc_stderr": 0.038796870240733264, "acc_norm": 0.3443708609271523, "acc_norm_stderr": 0.038796870240733264 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8201834862385321, "acc_stderr": 0.016465345467391545, "acc_norm": 0.8201834862385321, "acc_norm_stderr": 0.016465345467391545 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5509259259259259, "acc_stderr": 0.03392238405321617, "acc_norm": 0.5509259259259259, "acc_norm_stderr": 0.03392238405321617 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7941176470588235, "acc_stderr": 0.028379449451588667, "acc_norm": 0.7941176470588235, "acc_norm_stderr": 0.028379449451588667 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7763713080168776, "acc_stderr": 0.027123298205229966, "acc_norm": 0.7763713080168776, "acc_norm_stderr": 0.027123298205229966 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.03138147637575499, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.03138147637575499 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7786259541984732, "acc_stderr": 0.0364129708131373, "acc_norm": 0.7786259541984732, "acc_norm_stderr": 0.0364129708131373 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7851239669421488, "acc_stderr": 0.037494924487096966, "acc_norm": 0.7851239669421488, "acc_norm_stderr": 0.037494924487096966 }, "harness|hendrycksTest-jurisprudence|5": 
{ "acc": 0.7777777777777778, "acc_stderr": 0.040191074725573483, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.040191074725573483 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7791411042944786, "acc_stderr": 0.03259177392742178, "acc_norm": 0.7791411042944786, "acc_norm_stderr": 0.03259177392742178 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.8252427184466019, "acc_stderr": 0.03760178006026621, "acc_norm": 0.8252427184466019, "acc_norm_stderr": 0.03760178006026621 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8846153846153846, "acc_stderr": 0.020930193185179333, "acc_norm": 0.8846153846153846, "acc_norm_stderr": 0.020930193185179333 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8135376756066411, "acc_stderr": 0.013927751372001505, "acc_norm": 0.8135376756066411, "acc_norm_stderr": 0.013927751372001505 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7138728323699421, "acc_stderr": 0.02433214677913413, "acc_norm": 0.7138728323699421, "acc_norm_stderr": 0.02433214677913413 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3027932960893855, "acc_stderr": 0.01536686038639711, "acc_norm": 0.3027932960893855, "acc_norm_stderr": 0.01536686038639711 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7679738562091504, "acc_stderr": 0.024170840879340863, "acc_norm": 0.7679738562091504, "acc_norm_stderr": 0.024170840879340863 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7041800643086816, "acc_stderr": 0.02592237178881877, "acc_norm": 0.7041800643086816, "acc_norm_stderr": 0.02592237178881877 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7376543209876543, "acc_stderr": 0.024477222856135114, "acc_norm": 0.7376543209876543, "acc_norm_stderr": 0.024477222856135114 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.475177304964539, "acc_stderr": 0.02979071924382972, "acc_norm": 0.475177304964539, "acc_norm_stderr": 0.02979071924382972 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4471968709256845, "acc_stderr": 0.012698825252435108, "acc_norm": 0.4471968709256845, "acc_norm_stderr": 0.012698825252435108 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6764705882352942, "acc_stderr": 0.028418208619406752, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.028418208619406752 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6715686274509803, "acc_stderr": 0.018999707383162673, "acc_norm": 0.6715686274509803, "acc_norm_stderr": 0.018999707383162673 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6909090909090909, "acc_stderr": 0.044262946482000985, "acc_norm": 0.6909090909090909, "acc_norm_stderr": 0.044262946482000985 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784593, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784593 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8159203980099502, "acc_stderr": 0.02740385941078685, "acc_norm": 0.8159203980099502, "acc_norm_stderr": 0.02740385941078685 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.84, "acc_stderr": 0.036845294917747115, "acc_norm": 0.84, "acc_norm_stderr": 0.036845294917747115 }, 
"harness|hendrycksTest-virology|5": { "acc": 0.5301204819277109, "acc_stderr": 0.03885425420866767, "acc_norm": 0.5301204819277109, "acc_norm_stderr": 0.03885425420866767 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8245614035087719, "acc_stderr": 0.02917088550072767, "acc_norm": 0.8245614035087719, "acc_norm_stderr": 0.02917088550072767 }, "harness|truthfulqa:mc|0": { "mc1": 0.2937576499388005, "mc1_stderr": 0.015945068581236614, "mc2": 0.43638896018594414, "mc2_stderr": 0.01419131146424957 }, "harness|winogrande|5": { "acc": 0.7837411207576953, "acc_stderr": 0.01157061486140935 }, "harness|drop|3": { "em": 0.0016778523489932886, "em_stderr": 0.00041913301788268467, "f1": 0.06136954697986581, "f1_stderr": 0.0013699074965009578 }, "harness|gsm8k|5": { "acc": 0.17437452615617893, "acc_stderr": 0.010451421361976233 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_uukuguy__airoboros-m-7b-3.1.2-dare-0.85
[ "region:us" ]
2023-11-23T23:07:08+00:00
{"pretty_name": "Evaluation run of uukuguy/airoboros-m-7b-3.1.2-dare-0.85", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/airoboros-m-7b-3.1.2-dare-0.85](https://huggingface.co/uukuguy/airoboros-m-7b-3.1.2-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__airoboros-m-7b-3.1.2-dare-0.85_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-23T23:04:08.316762](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__airoboros-m-7b-3.1.2-dare-0.85_public/blob/main/results_2023-11-23T23-04-08.316762.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.634042190068382,\n \"acc_stderr\": 0.032283010718078695,\n \"acc_norm\": 0.6433235552057146,\n \"acc_norm_stderr\": 0.03298195134123534,\n \"mc1\": 0.2937576499388005,\n \"mc1_stderr\": 0.015945068581236614,\n \"mc2\": 0.43638896018594414,\n \"mc2_stderr\": 0.01419131146424957,\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788268467,\n \"f1\": 0.06136954697986581,\n \"f1_stderr\": 0.0013699074965009578\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.575938566552901,\n \"acc_stderr\": 0.0144418896274644,\n \"acc_norm\": 0.6109215017064846,\n \"acc_norm_stderr\": 0.014247309976045607\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6330412268472416,\n \"acc_stderr\": 0.00480990115123484,\n \"acc_norm\": 0.8356901015733917,\n \"acc_norm_stderr\": 0.003697992356124477\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n \"acc_stderr\": 0.03716177437566017,\n 
\"acc_norm\": 0.7291666666666666,\n \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3862433862433862,\n \"acc_stderr\": 0.02507598176760168,\n \"acc_norm\": 0.3862433862433862,\n \"acc_norm_stderr\": 0.02507598176760168\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.0436031486007746,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.0436031486007746\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7677419354838709,\n \"acc_stderr\": 0.024022256130308235,\n \"acc_norm\": 0.7677419354838709,\n \"acc_norm_stderr\": 0.024022256130308235\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.0303137105381989,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.0303137105381989\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 
0.02381447708659355,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.02381447708659355\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.658974358974359,\n \"acc_stderr\": 0.02403548967633508,\n \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.02403548967633508\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8201834862385321,\n \"acc_stderr\": 0.016465345467391545,\n \"acc_norm\": 0.8201834862385321,\n \"acc_norm_stderr\": 0.016465345467391545\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.03392238405321617,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.03392238405321617\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588667,\n \"acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588667\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7763713080168776,\n \"acc_stderr\": 0.027123298205229966,\n \"acc_norm\": 0.7763713080168776,\n \"acc_norm_stderr\": 0.027123298205229966\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.0364129708131373,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.0364129708131373\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n 
\"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8135376756066411,\n \"acc_stderr\": 0.013927751372001505,\n \"acc_norm\": 0.8135376756066411,\n \"acc_norm_stderr\": 0.013927751372001505\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7138728323699421,\n \"acc_stderr\": 0.02433214677913413,\n \"acc_norm\": 0.7138728323699421,\n \"acc_norm_stderr\": 0.02433214677913413\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3027932960893855,\n \"acc_stderr\": 0.01536686038639711,\n \"acc_norm\": 0.3027932960893855,\n \"acc_norm_stderr\": 0.01536686038639711\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7679738562091504,\n \"acc_stderr\": 0.024170840879340863,\n \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.024170840879340863\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n \"acc_stderr\": 0.02592237178881877,\n \"acc_norm\": 0.7041800643086816,\n \"acc_norm_stderr\": 0.02592237178881877\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135114,\n \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135114\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.475177304964539,\n \"acc_stderr\": 0.02979071924382972,\n \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.02979071924382972\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4471968709256845,\n \"acc_stderr\": 0.012698825252435108,\n \"acc_norm\": 0.4471968709256845,\n \"acc_norm_stderr\": 0.012698825252435108\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406752,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406752\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n \"acc_stderr\": 0.02740385941078685,\n \"acc_norm\": 0.8159203980099502,\n \"acc_norm_stderr\": 0.02740385941078685\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.036845294917747115,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.036845294917747115\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2937576499388005,\n \"mc1_stderr\": 0.015945068581236614,\n \"mc2\": 0.43638896018594414,\n \"mc2_stderr\": 0.01419131146424957\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7837411207576953,\n \"acc_stderr\": 
0.01157061486140935\n },\n \"harness|drop|3\": {\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788268467,\n \"f1\": 0.06136954697986581,\n \"f1_stderr\": 0.0013699074965009578\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17437452615617893,\n \"acc_stderr\": 0.010451421361976233\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/airoboros-m-7b-3.1.2-dare-0.85", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|arc:challenge|25_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|drop|3_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|gsm8k|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hellaswag|10_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T23-04-08.316762.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T23-04-08.316762.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-23T23-04-08.316762.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-23T23-04-08.316762.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-23T23-04-08.316762.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["**/details_harness|winogrande|5_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-23T23-04-08.316762.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_23T23_04_08.316762", "path": ["results_2023-11-23T23-04-08.316762.parquet"]}, {"split": "latest", "path": ["results_2023-11-23T23-04-08.316762.parquet"]}]}]}
2023-11-23T23:07:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of uukuguy/airoboros-m-7b-3.1.2-dare-0.85 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model uukuguy/airoboros-m-7b-3.1.2-dare-0.85 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-23T23:04:08.316762 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
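A minimal sketch of that load call, assuming the Open LLM Leaderboard's usual naming scheme for details repositories; the exact repo id is an assumption, and `harness_winogrande_5` is one of the config names listed in this card:

```python
# Sketch only: the repo id follows the leaderboard's "details_<org>__<model>"
# convention and is not confirmed by this card; the "latest" split always
# points at the most recent run, per the summary above.
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__airoboros-m-7b-3.1.2-dare-0.85",
    "harness_winogrande_5",
    split="latest",
)
```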
[ "# Dataset Card for Evaluation run of uukuguy/airoboros-m-7b-3.1.2-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/airoboros-m-7b-3.1.2-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T23:04:08.316762(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of uukuguy/airoboros-m-7b-3.1.2-dare-0.85", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/airoboros-m-7b-3.1.2-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-23T23:04:08.316762(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 29, 31, 178, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/airoboros-m-7b-3.1.2-dare-0.85## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/airoboros-m-7b-3.1.2-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-23T23:04:08.316762(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
f76b0f2aded79038884bf96ee449721e96f5f09d
# Dataset Card for "autotrain-data-s87q-oi1d-wuad" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xwar/autotrain-data-s87q-oi1d-wuad
[ "region:us" ]
2023-11-23T23:19:53+00:00
{"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 393891, "num_examples": 1197}, {"name": "validation", "num_bytes": 393891, "num_examples": 1197}], "download_size": 195874, "dataset_size": 787782}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2023-11-23T23:19:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotrain-data-s87q-oi1d-wuad" More Information needed
[ "# Dataset Card for \"autotrain-data-s87q-oi1d-wuad\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotrain-data-s87q-oi1d-wuad\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-s87q-oi1d-wuad\"\n\nMore Information needed" ]
c938656d6e667ff3b91f08343563d2765c166c49
# Dataset Card for "landmarks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
danielz01/landmarks
[ "region:us" ]
2023-11-23T23:52:02+00:00
{"dataset_info": {"config_name": "NAIP", "features": [{"name": "image", "dtype": "image"}, {"name": "osm_relation_id", "dtype": "int64"}, {"name": "wikidata_entity_id", "dtype": "string"}, {"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "alt", "dtype": "null"}, {"name": "name", "dtype": "string"}, {"name": "instanceOfIDs", "sequence": "string"}, {"name": "instanceOfLabels", "sequence": "string"}, {"name": "distractorIDs", "sequence": "string"}, {"name": "distractorLabels", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3907742505.0, "num_examples": 602}], "download_size": 3907443672, "dataset_size": 3907742505.0}, "configs": [{"config_name": "NAIP", "data_files": [{"split": "train", "path": "NAIP/train-*"}]}]}
2023-11-23T23:57:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "landmarks" More Information needed
[ "# Dataset Card for \"landmarks\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"landmarks\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"landmarks\"\n\nMore Information needed" ]
ba1a27826c626a6908f23446840d9920de351540
# Dataset of iroha (Blue Archive)

This is the dataset of iroha (Blue Archive), containing 150 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))

| Name            | Images | Download                                 | Description                                                                               |
|:----------------|-------:|:-----------------------------------------|:------------------------------------------------------------------------------------------|
| raw             |    150 | [Download](dataset-raw.zip)              | Raw data with meta information.                                                           |
| raw-stage3      |    416 | [Download](dataset-raw-stage3.zip)       | 3-stage cropped raw data with meta information.                                           |
| raw-stage3-eyes |    482 | [Download](dataset-raw-stage3-eyes.zip)  | 3-stage cropped (with eye-focus) raw data with meta information.                          |
| 384x512         |    150 | [Download](dataset-384x512.zip)          | 384x512 aligned dataset.                                                                  |
| 512x704         |    150 | [Download](dataset-512x704.zip)          | 512x704 aligned dataset.                                                                  |
| 640x880         |    150 | [Download](dataset-640x880.zip)          | 640x880 aligned dataset.                                                                  |
| stage3-640      |    416 | [Download](dataset-stage3-640.zip)       | 3-stage cropped dataset with the shorter side not exceeding 640 pixels.                   |
| stage3-800      |    416 | [Download](dataset-stage3-800.zip)       | 3-stage cropped dataset with the shorter side not exceeding 800 pixels.                   |
| stage3-p512-640 |    356 | [Download](dataset-stage3-p512-640.zip)  | 3-stage cropped dataset with the area not less than 512x512 pixels.                       |
| stage3-eyes-640 |    482 | [Download](dataset-stage3-eyes-640.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels.  |
| stage3-eyes-800 |    482 | [Download](dataset-stage3-eyes-800.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels.  |
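A minimal sketch of fetching one of the packages above with `huggingface_hub`; the repo id is taken from where this card is published, and the filename from the table:

```python
# Sketch only: downloads one aligned package from this dataset repository
# into the local Hugging Face cache and returns its path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="AppleHarem/iroha_bluearchive",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)
print(path)  # local path to the cached zip archive
```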
AppleHarem/iroha_bluearchive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-11-24T00:07:32+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-11-24T00:08:07+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of iroha (Blue Archive) =============================== This is the dataset of iroha (Blue Archive), containing 150 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). (LittleAppleWebUI)
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
21258b5482ea38389f8f5c26d625a9b86f4e4019
# Hitchhiker's Guide to the Galaxy GPT-4-Turbo generations to elicit responses modelled on the Hitchhiker's Guide to the Galaxy. Add some spice to your LLMs. Enjoy! ![Tess](https://huggingface.co/datasets/migtissera/Hitchhiker/resolve/main/media/Hitchhiker.png)
migtissera/Hitchhiker
[ "license:apache-2.0", "region:us" ]
2023-11-24T00:09:57+00:00
{"license": "apache-2.0"}
2023-11-27T18:33:51+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Hitchhiker's Guide to the Galaxy GPT-4-Turbo generations to elicit responses modelled on the Hitchhiker's Guide to the Galaxy. Add some spice to your LLMs. Enjoy! !Tess
[ "# Hitchhiker's Guide to the Galaxy\n\nGPT-4-Turbo generations to elicit responses modelled on the Hitchhiker's Guide to the Galaxy.\n\nAdd some spice to your LLMs. Enjoy!\n\n!Tess" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# Hitchhiker's Guide to the Galaxy\n\nGPT-4-Turbo generations to elicit responses modelled on the Hitchhiker's Guide to the Galaxy.\n\nAdd some spice to your LLMs. Enjoy!\n\n!Tess" ]
[ 14, 55 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n# Hitchhiker's Guide to the Galaxy\n\nGPT-4-Turbo generations to elicit responses modelled on the Hitchhiker's Guide to the Galaxy.\n\nAdd some spice to your LLMs. Enjoy!\n\n!Tess" ]
e77dcaca36c08c9f2201530d27cd289c5985b9c4
# Dataset Card for "sharegpt_instructions_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
justinphan3110/sharegpt_instructions_small
[ "region:us" ]
2023-11-24T00:17:26+00:00
{"dataset_info": {"features": [{"name": "instructions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58210, "num_examples": 424}], "download_size": 40903, "dataset_size": 58210}}
2023-11-24T00:17:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sharegpt_instructions_small" More Information needed
[ "# Dataset Card for \"sharegpt_instructions_small\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sharegpt_instructions_small\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sharegpt_instructions_small\"\n\nMore Information needed" ]
8b3c873ead3c92e964acb24dfc1458236c1b484f
# Dataset Card for Dataset Name

This dataset provides around 8,000 prompts in Spanish about short stories.

The following is the prompt in English:

```
prompt: Write a short story based on the following title: {{titles}}
completion: {{contents}}
```

In Spanish:

```
prompt: Escribe una historia corta basada en el siguiente título {{titles}}
completion: {{contents}}
```

# More Information

This dataset is a sub-version of the original [chico dataset](https://huggingface.co/datasets/snats/chico).
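A minimal sketch of loading and inspecting this dataset with the standard `datasets` API; the split and the column names `prompt` and `completion` are assumptions inferred from the template above:

```python
# Sketch only: "train" split and the "prompt"/"completion" column names are
# assumed from the prompt/completion template shown in this card.
from datasets import load_dataset

ds = load_dataset("snats/chico_prompts_generate_story", split="train")
example = ds[0]
print(example["prompt"])
print(example["completion"])
```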
snats/chico_prompts_generate_story
[ "language:es", "license:cc-by-4.0", "region:us" ]
2023-11-24T00:39:13+00:00
{"language": ["es"], "license": "cc-by-4.0"}
2023-11-24T00:50:44+00:00
[]
[ "es" ]
TAGS #language-Spanish #license-cc-by-4.0 #region-us
# Dataset Card for Dataset Name This dataset provides around 8,000 prompts in Spanish about short stories. The following is the prompt in English: In Spanish: # More Information This dataset is a sub-version of the original chico dataset.
[ "# Dataset Card for Dataset Name\n\nThis dataset provides around 8,000 prompts in Spanish about short stories.\n\nThe following is the prompt in english:\n\n\n\nIn spanish:", "# More Information\n\nThis dataset is a sub-version of the original chico dataset." ]
[ "TAGS\n#language-Spanish #license-cc-by-4.0 #region-us \n", "# Dataset Card for Dataset Name\n\nThis dataset provides around 8,000 prompts in Spanish about short stories.\n\nThe following is the prompt in english:\n\n\n\nIn spanish:", "# More Information\n\nThis dataset is a sub-version of the original chico dataset." ]
[ 20, 35, 19 ]
[ "passage: TAGS\n#language-Spanish #license-cc-by-4.0 #region-us \n# Dataset Card for Dataset Name\n\nThis dataset provides around 8,000 prompts in Spanish about short stories.\n\nThe following is the prompt in english:\n\n\n\nIn spanish:# More Information\n\nThis dataset is a sub-version of the original chico dataset." ]
63b031fbaa7c8abe7a942dad24151871e482f812
# Dataset of midori (Blue Archive)

This is the dataset of midori (Blue Archive), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))

| Name            | Images | Download                                 | Description                                                                               |
|:----------------|-------:|:-----------------------------------------|:------------------------------------------------------------------------------------------|
| raw             |    200 | [Download](dataset-raw.zip)              | Raw data with meta information.                                                           |
| raw-stage3      |    556 | [Download](dataset-raw-stage3.zip)       | 3-stage cropped raw data with meta information.                                           |
| raw-stage3-eyes |    676 | [Download](dataset-raw-stage3-eyes.zip)  | 3-stage cropped (with eye-focus) raw data with meta information.                          |
| 384x512         |    200 | [Download](dataset-384x512.zip)          | 384x512 aligned dataset.                                                                  |
| 512x704         |    200 | [Download](dataset-512x704.zip)          | 512x704 aligned dataset.                                                                  |
| 640x880         |    200 | [Download](dataset-640x880.zip)          | 640x880 aligned dataset.                                                                  |
| stage3-640      |    556 | [Download](dataset-stage3-640.zip)       | 3-stage cropped dataset with the shorter side not exceeding 640 pixels.                   |
| stage3-800      |    556 | [Download](dataset-stage3-800.zip)       | 3-stage cropped dataset with the shorter side not exceeding 800 pixels.                   |
| stage3-p512-640 |    518 | [Download](dataset-stage3-p512-640.zip)  | 3-stage cropped dataset with the area not less than 512x512 pixels.                       |
| stage3-eyes-640 |    676 | [Download](dataset-stage3-eyes-640.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels.  |
| stage3-eyes-800 |    676 | [Download](dataset-stage3-eyes-800.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels.  |
AppleHarem/midori_bluearchive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-11-24T00:42:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-11-24T00:42:42+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of midori (Blue Archive) ================================ This is the dataset of midori (Blue Archive), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). (LittleAppleWebUI)
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
8db86460457b325d1cd4b36cf9323fa7f8470e75
# Dataset Card for Dataset Name

This dataset provides around 8,000 prompts in Spanish to suggest titles from snippets of short stories.

The following is the prompt in English:

```
prompt: Suggest a title for the following story: {{contents}}
completion: Sure, here's a suitable title for the given story {{titles}}.
```

In Spanish:

```
prompt: Sugiere un título para la siguiente historia: {{contents}}
completion: Un título posible para la siguiente historia podría ser: {{titles}}
```

# More Information

This dataset is a sub-version of the original [chico dataset](https://huggingface.co/datasets/snats/chico).
snats/chico_prompts_suggest_title
[ "language:es", "license:cc-by-4.0", "region:us" ]
2023-11-24T00:46:15+00:00
{"language": ["es"], "license": "cc-by-4.0"}
2023-11-24T00:49:22+00:00
[]
[ "es" ]
TAGS #language-Spanish #license-cc-by-4.0 #region-us
# Dataset Card for Dataset Name This dataset provides around 8,000 prompts in Spanish to suggest titles from snippets of short stories. The following is the prompt in English: In Spanish: # More Information This dataset is a sub-version of the original chico dataset.
[ "# Dataset Card for Dataset Name\n\nThis dataset provides around 8,000 prompts in Spanish to suggest titles from snippets of short stories.\n\nThe following is the prompt in english:\n\n\n\nIn spanish:", "# More Information\n\nThis dataset is a sub-version of the original chico dataset." ]
[ "TAGS\n#language-Spanish #license-cc-by-4.0 #region-us \n", "# Dataset Card for Dataset Name\n\nThis dataset provides around 8,000 prompts in Spanish to suggest titles from snippets of short stories.\n\nThe following is the prompt in english:\n\n\n\nIn spanish:", "# More Information\n\nThis dataset is a sub-version of the original chico dataset." ]
[ 20, 44, 19 ]
[ "passage: TAGS\n#language-Spanish #license-cc-by-4.0 #region-us \n# Dataset Card for Dataset Name\n\nThis dataset provides around 8,000 prompts in Spanish to suggest titles from snippets of short stories.\n\nThe following is the prompt in english:\n\n\n\nIn spanish:# More Information\n\nThis dataset is a sub-version of the original chico dataset." ]
15f6fc446fc22dcc15305c47a91d7a7934e606c5
# Dataset Card for "sharegpt_instructions_small_en_vi_answers" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
justinphan3110/sharegpt_instructions_small_en_vi_answers
[ "region:us" ]
2023-11-24T01:11:14+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "vn", "dtype": "string"}, {"name": "en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 218457, "num_examples": 424}], "download_size": 138882, "dataset_size": 218457}}
2023-11-24T01:11:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sharegpt_instructions_small_en_vi_answers" More Information needed
[ "# Dataset Card for \"sharegpt_instructions_small_en_vi_answers\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sharegpt_instructions_small_en_vi_answers\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sharegpt_instructions_small_en_vi_answers\"\n\nMore Information needed" ]
e67c71158bbf981dba3188c02075fb2c1745c755
# How Many Unicorns Are In This Image? A Safety Evaluation Benchmark For Vision LLMs (Dataset)

Paper: https://arxiv.org/abs/2311.16101

Code: https://github.com/UCSC-VLAA/vllm-safety-benchmark

The full dataset should look like this:

```
.
├── ./safety_evaluation_benchmark_datasets//
    ├── gpt4v_challenging_set # Contains the challenging test data for GPT4V
        ├── attack_images
        ├── sketchy_images
        ├── oodcv_images
        ├── misleading-attack.json
        ├── sketchy-vqa-challenging.json
        └── oodcv-vqa-counterfactual.json
    ├── redteaming-mislead # Contains the test data for redteaming tasks
        ├── redteaming_attack
            ├── gaussian_noise
            ├── mixattack_eps32
            ├── mixattack_eps64
            ├── sinattack_eps64_dog
            ├── sinattack_eps64_coconut
            ├── sinattack_eps64_spaceship
            └── annotation.json
        └── jailbreak_llm # adversarial suffixes for jailbreaking VLLM through LLM
    └── ood # Contains the test data for OOD scenarios
        ├── sketchy-vqa
            ├── sketchy-vqa.json
            ├── sketchy-challenging.json
        └── oodcv-vqa
            ├── oodcv-vqa.json
            └── oodcv-counterfactual.json
```
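A minimal sketch for reading one of the annotation files in the layout above, assuming the files are plain JSON and the tree is unpacked under the current directory:

```python
# Sketch only: the path is taken from the directory tree above; the file is
# assumed to be plain JSON (the .json extension suggests this, but the exact
# schema is not documented in this card).
import json
from pathlib import Path

root = Path("safety_evaluation_benchmark_datasets")
with open(root / "gpt4v_challenging_set" / "misleading-attack.json") as f:
    records = json.load(f)
print(type(records))
```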
PahaII/vllm_safety_evaluation
[ "license:apache-2.0", "arxiv:2311.16101", "region:us" ]
2023-11-24T01:57:59+00:00
{"license": "apache-2.0"}
2023-11-28T02:35:11+00:00
[ "2311.16101" ]
[]
TAGS #license-apache-2.0 #arxiv-2311.16101 #region-us
# How Many Unicorns Are In This Image? A Safety Evaluation Benchmark For Vision LLMs (Dataset) Paper: URL Code: URL The full dataset should look like this:
[ "# How Many Unicorns Are In This Image? A Safety Evaluation Benchmark For Vision LLMs (Dataset)\n\nPaper: URL\n\nCode: URL\n\nThe full dataset should look like this:" ]
[ "TAGS\n#license-apache-2.0 #arxiv-2311.16101 #region-us \n", "# How Many Unicorns Are In This Image? A Safety Evaluation Benchmark For Vision LLMs (Dataset)\n\nPaper: URL\n\nCode: URL\n\nThe full dataset should look like this:" ]
[ 23, 42 ]
[ "passage: TAGS\n#license-apache-2.0 #arxiv-2311.16101 #region-us \n# How Many Unicorns Are In This Image? A Safety Evaluation Benchmark For Vision LLMs (Dataset)\n\nPaper: URL\n\nCode: URL\n\nThe full dataset should looks like this:" ]
e452bf1c7db9add8257efecd4553fe34a1476bb8
# Dataset Card for "en_thai_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Konthee/en_thai_small
[ "region:us" ]
2023-11-24T03:22:52+00:00
{"dataset_info": {"features": [{"name": "src_input_ids", "sequence": "int64"}, {"name": "src_attention_mask", "sequence": "int64"}, {"name": "trg_input_ids", "sequence": "int64"}, {"name": "trg_attention_mask", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2747840, "num_examples": 1108}], "download_size": 89577, "dataset_size": 2747840}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T09:08:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "en_thai_small" More Information needed
[ "# Dataset Card for \"en_thai_small\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"en_thai_small\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"en_thai_small\"\n\nMore Information needed" ]
b843886c7a8f4671d26254f42befd81e3b00ad91
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> ## Performance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/HTwIzw6RDha2-PhuWqSuI.png) ## Citation If you find Taiwan LLM is useful in your work, please cite it with: ``` @misc{zheng2023judging, title={Judging LLM-as-a-judge with MT-Bench and Chatbot Arena}, author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zi Lin and Zhuohan Li and Dacheng Li and Eric. P Xing and Hao Zhang and Joseph E. Gonzalez and Ion Stoica}, year={2023}, eprint={2306.05685}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{lin2023taiwan, title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model}, author={Yen-Ting Lin and Yun-Nung Chen}, year={2023}, eprint={2311.17487}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
yentinglin/Taiwan-Bench
[ "task_categories:table-question-answering", "task_categories:question-answering", "task_categories:text-generation", "size_categories:1K<n<10K", "language:zh", "license:apache-2.0", "arxiv:2306.05685", "arxiv:2311.17487", "region:us" ]
2023-11-24T03:23:40+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["table-question-answering", "question-answering", "text-generation"], "pretty_name": "TWMTBench", "data_files": [{"split": "test", "path": "Taiwan-MT-Bench.jsonl"}]}
2024-01-05T14:18:52+00:00
[ "2306.05685", "2311.17487" ]
[ "zh" ]
TAGS #task_categories-table-question-answering #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-apache-2.0 #arxiv-2306.05685 #arxiv-2311.17487 #region-us
<img src="URL alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> ## Performance !image/png If you find Taiwan LLM is useful in your work, please cite it with:
[ "## Performance\n\n\n!image/png\n\n\nIf you find Taiwan LLM is useful in your work, please cite it with:" ]
[ "TAGS\n#task_categories-table-question-answering #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-apache-2.0 #arxiv-2306.05685 #arxiv-2311.17487 #region-us \n", "## Performance\n\n\n!image/png\n\n\nIf you find Taiwan LLM is useful in your work, please cite it with:" ]
[ 85, 23 ]
[ "passage: TAGS\n#task_categories-table-question-answering #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-apache-2.0 #arxiv-2306.05685 #arxiv-2311.17487 #region-us \n## Performance\n\n\n!image/png\n\n\nIf you find Taiwan LLM is useful in your work, please cite it with:" ]
dfce4b55fb22acca9f5310c6f0040507dd217615
# Retrieval_QA: A Simple Multilingual Benchmark For Retrieval Encoder Models

<!-- Provide a quick summary of the dataset. -->

The purpose of this dataset is to provide a simple and easy-to-use benchmark for retrieval encoder models, which helps researchers quickly select the most effective retrieval encoder for text extraction and achieve optimal results in subsequent retrieval tasks such as retrieval-augmented-generation (RAG). The dataset contains multiple document-question pairs, where each document is a short text about the history, culture, or other information of a country or region, and each question is a query relevant to the content of the corresponding document.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Users may select a retrieval encoder model to encode each document and query into corresponding embeddings, and then use vector matching methods such as FAISS to identify the most relevant documents for each query as retrieval results.

+ **Curated by**: <a href='https://wln20.github.io'>Luning Wang</a>

+ **Language(s)**: English, Chinese(Simplified, Traditional), Japanese, Spanish, German, Russian

+ **License**: Apache-2.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/wln20/Retrieval_QA
- **Paper:** TBD
- **Demo:** TBD

## Uses
The dataset is available on 🤗 Huggingface; you can conveniently use it in Python with 🤗 Datasets:
```python
from datasets import load_dataset
dataset_en = load_dataset('lnwang/retrieval_qa', name='en')
# dataset_zh_cn = load_dataset('lnwang/retrieval_qa', name='zh_cn')
# dataset_zh_tw = load_dataset('lnwang/retrieval_qa', name='zh_tw')
```
Now we support seven languages: English(en), Simplified-Chinese(zh_cn), Traditional-Chinese(zh_tw), Japanese(ja), Spanish(es), German(de), Russian(ru). You can specify the `name` argument in `load_dataset()` to get the corresponding subset.

For more usages, please follow the examples in the github repository of this project.

## Dataset Creation
The raw data was generated by GPT-3.5-turbo, using prompts carefully designed by humans. The data was also cleaned to remove controversial and incorrect information.
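As a sketch of the encode-then-match workflow described above (illustrative only: the sentence-transformers encoder, the model name, and the top-1 accuracy metric are assumptions, not part of the dataset):
```python
import faiss
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

dataset = load_dataset("lnwang/retrieval_qa", name="en", split="test")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any retrieval encoder under evaluation

# Encode documents and build a flat inner-product index (cosine similarity after normalization).
doc_emb = np.asarray(encoder.encode(dataset["doc"], normalize_embeddings=True), dtype="float32")
index = faiss.IndexFlatIP(doc_emb.shape[1])
index.add(doc_emb)

# Retrieve the closest document for each query and score top-1 accuracy.
query_emb = np.asarray(encoder.encode(dataset["query"], normalize_embeddings=True), dtype="float32")
_, top1 = index.search(query_emb, 1)
accuracy = float(np.mean(top1[:, 0] == np.arange(len(dataset))))
print(f"top-1 retrieval accuracy: {accuracy:.3f}")
```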
lnwang/retrieval_qa
[ "size_categories:1K<n<10K", "language:en", "language:zh", "language:ja", "language:es", "language:de", "language:ru", "license:apache-2.0", "art", "region:us" ]
2023-11-24T03:26:11+00:00
{"language": ["en", "zh", "ja", "es", "de", "ru"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "dataset_info": [{"config_name": "de", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 268775, "num_examples": 196}], "download_size": 0, "dataset_size": 268775}, {"config_name": "default", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 233289, "num_examples": 196}], "download_size": 0, "dataset_size": 233289}, {"config_name": "en", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 233289, "num_examples": 196}], "download_size": 0, "dataset_size": 233289}, {"config_name": "es", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 267456, "num_examples": 196}], "download_size": 0, "dataset_size": 267456}, {"config_name": "ja", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 268010, "num_examples": 196}], "download_size": 0, "dataset_size": 268010}, {"config_name": "ru", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 413438, "num_examples": 196}], "download_size": 191766, "dataset_size": 413438}, {"config_name": "zh_cn", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 200707, "num_examples": 196}], "download_size": 0, "dataset_size": 200707}, {"config_name": "zh_tw", "features": [{"name": "region", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "choice", "sequence": {"sequence": "string"}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 201205, "num_examples": 196}], "download_size": 0, "dataset_size": 201205}], "configs": [{"config_name": "de", "data_files": [{"split": "test", "path": "de/test-*"}]}, {"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}, {"config_name": "en", "data_files": [{"split": "test", "path": "en/test-*"}]}, {"config_name": "es", "data_files": [{"split": "test", "path": "es/test-*"}]}, {"config_name": "ja", "data_files": [{"split": "test", "path": "ja/test-*"}]}, {"config_name": "ru", "data_files": [{"split": "test", "path": "ru/test-*"}]}, {"config_name": "zh_cn", 
"data_files": [{"split": "test", "path": "zh_cn/test-*"}]}, {"config_name": "zh_tw", "data_files": [{"split": "test", "path": "zh_tw/test-*"}]}], "tags": ["art"]}
2023-12-22T07:24:23+00:00
[]
[ "en", "zh", "ja", "es", "de", "ru" ]
TAGS #size_categories-1K<n<10K #language-English #language-Chinese #language-Japanese #language-Spanish #language-German #language-Russian #license-apache-2.0 #art #region-us
# Retrieval_QA: A Simple Multilingual Benchmark For Retrieval Encoder Models The purpose of this dataset is to provide a simple and easy-to-use benchmark for retrieval encoder models, which helps researchers quickly select the most effective retrieval encoder for text extraction and achieve optimal results in subsequent retrieval tasks such as retrieval-augmented-generation (RAG). The dataset contains multiple document-question pairs, where each document is a short text about the history, culture, or other information of a country or region, and each question is a query relevant to the content of the corresponding document. ## Dataset Details ### Dataset Description Users may select a retrieval encoder model to encode each document and query into corresponding embeddings, and then use vector matching methods such as FAISS to identify the most relevant documents for each query as retrieval results. + Curated by: <a href='URL'>Luning Wang</a> + Language(s): English, Chinese(Simplified, Traditional), Japanese, Spanish, German, Russian + License: Apache-2.0 ### Dataset Sources - Repository: URL - Paper: TBD - Demo: TBD ## Uses The dataset is available on Huggingface; you can conveniently use it in Python with Datasets: Now we support seven languages: English(en), Simplified-Chinese(zh_cn), Traditional-Chinese(zh_tw), Japanese(ja), Spanish(es), German(de), Russian(ru). You can specify the 'name' argument in 'load_dataset()' to get the corresponding subset. For more usages, please follow the examples in the github repository of this project. ## Dataset Creation The raw data was generated by GPT-3.5-turbo, using prompts carefully designed by humans. The data was also cleaned to remove controversial and incorrect information.
[ "# Retrieval_QA: A Simple Multilingual Benchmark For Retrieval Encoder Models\n\n\n\nThe purpose of this dataset is to provide a simple and easy-to-use benchmark for retrieval encoder models, which helps researchers quickly select the most effective retrieval encoder for text extraction and achieve optimal results in subsequent retrieval tasks such as retrieval-augmented-generation (RAG). The dataset contains multiple document-question pairs, where each document is a short text about the history, culture, or other information of a country or region, and each question is a query relevant to the content of the corresponding document.", "## Dataset Details", "### Dataset Description\n\n\nUsers may select a retrieval encoder model to encode each document and query into corresponding embeddings, and then use vector matching methods such as FAISS to identify the most relevant documents for each query as retrieval results.\n\n\n+ Curated by: <a href='URL'>Luning Wang</a>\n\n+ Language(s): English, Chinese(Simplified, Traditional), Japanese, Spanish, German, Russian\n \n+ License: Apache-2.0", "### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: TBD\n- Demo: TBD", "## Uses\nThe dataset is available on Huggingface; you can conveniently use it in Python with Datasets:\n\nNow we support seven languages: English(en), Simplified-Chinese(zh_cn), Traditional-Chinese(zh_tw), Japanese(ja), Spanish(es), German(de), Russian(ru). You can specify the 'name' argument in 'load_dataset()' to get the corresponding subset.\n\nFor more usages, please follow the examples in the github repository of this project.", "## Dataset Creation\nThe raw data was generated by GPT-3.5-turbo, using prompts carefully designed by humans. The data was also cleaned to remove controversial and incorrect information." ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #language-Chinese #language-Japanese #language-Spanish #language-German #language-Russian #license-apache-2.0 #art #region-us \n", "# Retrieval_QA: A Simple Multilingual Benchmark For Retrieval Encoder Models\n\n\n\nThe purpose of this dataset is to provide a simple and easy-to-use benchmark for retrieval encoder models, which helps researchers quickly select the most effective retrieval encoder for text extraction and achieve optimal results in subsequent retrieval tasks such as retrieval-augmented-generation (RAG). The dataset contains multiple document-question pairs, where each document is a short text about the history, culture, or other information of a country or region, and each question is a query relevant to the content of the corresponding document.", "## Dataset Details", "### Dataset Description\n\n\nUsers may select a retrieval encoder model to encode each document and query into corresponding embeddings, and then use vector matching methods such as FAISS to identify the most relevant documents for each query as retrieval results.\n\n\n+ Curated by: <a href='URL'>Luning Wang</a>\n\n+ Language(s): English, Chinese(Simplified, Traditional), Japanese, Spanish, German, Russian\n \n+ License: Apache-2.0", "### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: TBD\n- Demo: TBD", "## Uses\nThe dataset is available on Huggingface; you can conveniently use it in Python with Datasets:\n\nNow we support seven languages: English(en), Simplified-Chinese(zh_cn), Traditional-Chinese(zh_tw), Japanese(ja), Spanish(es), German(de), Russian(ru). You can specify the 'name' argument in 'load_dataset()' to get the corresponding subset.\n\nFor more usages, please follow the examples in the github repository of this project.", "## Dataset Creation\nThe raw data was generated by GPT-3.5-turbo, using prompts carefully designed by humans. The data was also cleaned to remove controversial and incorrect information." ]
[ 57, 149, 4, 111, 22, 122, 42 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #language-Chinese #language-Japanese #language-Spanish #language-German #language-Russian #license-apache-2.0 #art #region-us \n# Retrieval_QA: A Simple Multilingual Benchmark For Retrieval Encoder Models\n\n\n\nThe purpose of this dataset is to provide a simple and easy-to-use benchmark for retrieval encoder models, which helps researchers quickly select the most effective retrieval encoder for text extraction and achieve optimal results in subsequent retrieval tasks such as retrieval-augmented-generation (RAG). The dataset contains multiple document-question pairs, where each document is a short text about the history, culture, or other information of a country or region, and each question is a query relevant to the content of the corresponding document.## Dataset Details### Dataset Description\n\n\nUsers may select a retrieval encoder model to encode each document and query into corresponding embeddings, and then use vector matching methods such as FAISS to identify the most relevant documents for each query as regression results.\n\n\n+ Curated by: <a href='URL'>Luning Wang</a>\n\n+ Language(s): English, Chinese(Simplified, Traditional), Japanse, Spanish, German, Russian\n \n+ License: Apache-2.0### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: TBD\n- Demo: TBD## Uses\nThe dataset is available on Huggingface, you can conveniently use it in python with Datasets:\n\nNow we support three languages: English(en), Simplified-Chinese(zh_cn), Traditional-Chinese(zh_tw), Japanese(ja), Spanish(es), German(de), Russian(ru). You can specify the 'name' argument in 'load_dataset()' to get the corresponding subset.\n\nFor more usages, please follow the examples in the github repository of this project.## Dataset Creation\nThe raw data was generated by GPT-3.5-turbo, using carefully designed prompts by human. The data was also cleaned to remove controversial and incorrect information." ]
aa8b7d42362213e740519b007ffcc8cbe19b6fd5
I added the longer context samples from erotica-analysys and removed the smaller samples from the original erotiquant. The minimum context size is 8000 per sample.
openerotica/erotiquant-xl
[ "license:apache-2.0", "region:us" ]
2023-11-24T03:46:10+00:00
{"license": "apache-2.0"}
2023-11-24T03:50:06+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
I added the longer context samples from erotica-analysys and removed the smaller samples from the original erotiquant. The minimum context size is 8000 per sample.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
88a7f9eed6f6c76df4ed656ed4f0018bf3ebd86e
# Dataset Card for "platypus-flat" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sordonia/platypus-flat
[ "region:us" ]
2023-11-24T03:50:31+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31205349, "num_examples": 24926}], "download_size": 15583989, "dataset_size": 31205349}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T04:34:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "platypus-flat" More Information needed
[ "# Dataset Card for \"platypus-flat\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"platypus-flat\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"platypus-flat\"\n\nMore Information needed" ]
9185db7962df788e8004c40441587d9281c8e8b3
# Dataset Card for "webglm_vi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nguyenthanhdo/webglm_vi
[ "region:us" ]
2023-11-24T04:04:11+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "references", "sequence": "string"}, {"name": "translated", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 154260166, "num_examples": 43579}], "download_size": 77675307, "dataset_size": 154260166}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T04:04:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "webglm_vi" More Information needed
[ "# Dataset Card for \"webglm_vi\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"webglm_vi\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"webglm_vi\"\n\nMore Information needed" ]
2bf8dc52ce466b083f3db9f618190a967d9820a7
# Dataset Card for "a36c16e4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/a36c16e4
[ "region:us" ]
2023-11-24T04:21:24+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 170, "num_examples": 10}], "download_size": 1325, "dataset_size": 170}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T04:21:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "a36c16e4" More Information needed
[ "# Dataset Card for \"a36c16e4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"a36c16e4\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"a36c16e4\"\n\nMore Information needed" ]
1d663714254e8fcb0f8533d7da066e5ae8803efc
# Dataset Card for "ultrachat-32c-10k-flat" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sordonia/ultrachat-32c-10k-flat
[ "region:us" ]
2023-11-24T04:28:45+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 737780646, "num_examples": 320000}], "download_size": 417812898, "dataset_size": 737780646}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T15:53:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrachat-32c-10k-flat" More Information needed
[ "# Dataset Card for \"ultrachat-32c-10k-flat\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrachat-32c-10k-flat\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ultrachat-32c-10k-flat\"\n\nMore Information needed" ]
c01b7332853fb0df8d558019987d6476793c7fc8
# Used datasets: ## sordonia/flan-10k-flat ## sordonia/mmlu-qa-flat ## sordonia/platypus-flat ## sordonia/ultrachat-32c-10k-flat ## Total number of tasks: 439
sordonia/adauni-v1-flat
[ "region:us" ]
2023-11-24T04:46:05+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7385230805, "num_examples": 3928352}], "download_size": 0, "dataset_size": 7385230805}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-30T23:35:23+00:00
[]
[]
TAGS #region-us
# Used datasets: ## sordonia/flan-10k-flat ## sordonia/mmlu-qa-flat ## sordonia/platypus-flat ## sordonia/ultrachat-32c-10k-flat ## Total number of tasks: 439
[ "# Used datasets:", "## sordonia/flan-10k-flat", "## sordonia/mmlu-qa-flat", "## sordonia/platypus-flat", "## sordonia/ultrachat-32c-10k-flat", "## Total number of tasks: 439" ]
[ "TAGS\n#region-us \n", "# Used datasets:", "## sordonia/flan-10k-flat", "## sordonia/mmlu-qa-flat", "## sordonia/platypus-flat", "## sordonia/ultrachat-32c-10k-flat", "## Total number of tasks: 439" ]
[ 6, 7, 10, 11, 10, 14, 9 ]
[ "passage: TAGS\n#region-us \n# Used datasets:## sordonia/flan-10k-flat## sordonia/mmlu-qa-flat## sordonia/platypus-flat## sordonia/ultrachat-32c-10k-flat## Total number of tasks: 439" ]
48c503063eb8039c76f2deefb32b83f8e9cd49ea
Data sample for testing DL code
lauransotomayor/eco_composition
[ "license:mit", "region:us" ]
2023-11-24T04:56:04+00:00
{"license": "mit"}
2023-11-24T05:07:40+00:00
[]
[]
TAGS #license-mit #region-us
Data sample for testing DL code
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
e0d8cd69ef689b56da414a55daee45f552a26a3d
## Dataset Details - Welcome to the Single-Speaker Mandarin Audio Dataset! This dataset is a curated subset extracted from a larger collection, focusing on audio recordings of a single speaker. Each audio file is accompanied by valuable linguistic annotations, including Pinyin transcriptions, tone information, and onset and offset details. ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Speaker:** The dataset exclusively features recordings of a single Mandarin speaker, providing consistency for various linguistic analyses and applications. - **Pinyin Transcriptions:** Each audio file comes with a corresponding Pinyin transcription, offering a phonetic representation of the spoken Mandarin. - **Tone Information:** Tone annotations are included to capture the tonal characteristics of the spoken language. This feature is essential for tone-related studies and applications. - **Onset and Offset Details:** Precise information about the onset and offset of each audio segment is provided. This allows for accurate segmentation and analysis of the spoken content. ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - Subset of the Original Kaggle Dataset ## Uses - Use for model evaluation or demo
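A minimal inspection sketch (illustrative; the feature names are assumptions based on the description above rather than confirmed column names):
```python
from datasets import load_dataset

ds = load_dataset("CS5647Team3/data_mini")
print(ds)  # shows the available splits and columns
first_split = next(iter(ds.values()))
print(first_split[0])  # expected to expose audio, Pinyin, tone, and onset/offset fields
```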
CS5647Team3/data_mini
[ "task_categories:token-classification", "size_categories:100M<n<1B", "language:zh", "tone", "pinyin", "sentence", "audio", "region:us" ]
2023-11-24T05:04:37+00:00
{"language": ["zh"], "size_categories": ["100M<n<1B"], "task_categories": ["token-classification"], "tags": ["tone", "pinyin", "sentence", "audio"]}
2023-11-24T05:16:24+00:00
[]
[ "zh" ]
TAGS #task_categories-token-classification #size_categories-100M<n<1B #language-Chinese #tone #pinyin #sentence #audio #region-us
## Dataset Details - Welcome to the Single-Speaker Mandarin Audio Dataset! This dataset is a curated subset extracted from a larger collection, focusing on audio recordings of a single speaker. Each audio file is accompanied by valuable linguistic annotations, including Pinyin transcriptions, tone information, and onset and offset details. ### Dataset Description - Speaker: The dataset exclusively features recordings of a single Mandarin speaker, providing consistency for various linguistic analyses and applications. - Pinyin Transcriptions: Each audio file comes with a corresponding Pinyin transcription, offering a phonetic representation of the spoken Mandarin. - Tone Information: Tone annotations are included to capture the tonal characteristics of the spoken language. This feature is essential for tone-related studies and applications. - Onset and Offset Details: Precise information about the onset and offset of each audio segment is provided. This allows for accurate segmentation and analysis of the spoken content. ### Dataset Sources [optional] - Subset of the Original Kaggle Dataset ## Uses - Use for model evaluation or demo
[ "## Dataset Details\n- Welcome to the Single-Speaker Mandarin Audio Dataset! This dataset is a curated subset extracted from a larger collection, focusing on audio recordings of a single speaker. Each audio file is accompanied by valuable linguistic annotations, including Pinyin transcriptions, tone information, and onset and offset details.", "### Dataset Description\n\n\n\n\n\n- Speaker: The dataset exclusively features recordings of a single Mandarin speaker, providing consistency for various linguistic analyses and applications.\n- Pinyin Transcriptions: Each audio file comes with a corresponding Pinyin transcription, offering a phonetic representation of the spoken Mandarin.\n- Tone Information: Tone annotations are included to capture the tonal characteristics of the spoken language. This feature is essential for tone-related studies and applications.\n- Onset and Offset Details: Precise information about the onset and offset of each audio segment is provided. This allows for accurate segmentation and analysis of the spoken content.", "### Dataset Sources [optional]\n\n\n\n- Subset of the Original Kaggle Dataset", "## Uses\n\n- Use for model evaluation or demo" ]
[ "TAGS\n#task_categories-token-classification #size_categories-100M<n<1B #language-Chinese #tone #pinyin #sentence #audio #region-us \n", "## Dataset Details\n- Welcome to the Single-Speaker Mandarin Audio Dataset! This dataset is a curated subset extracted from a larger collection, focusing on audio recordings of a single speaker. Each audio file is accompanied by valuable linguistic annotations, including Pinyin transcriptions, tone information, and onset and offset details.", "### Dataset Description\n\n\n\n\n\n- Speaker: The dataset exclusively features recordings of a single Mandarin speaker, providing consistency for various linguistic analyses and applications.\n- Pinyin Transcriptions: Each audio file comes with a corresponding Pinyin transcription, offering a phonetic representation of the spoken Mandarin.\n- Tone Information: Tone annotations are included to capture the tonal characteristics of the spoken language. This feature is essential for tone-related studies and applications.\n- Onset and Offset Details: Precise information about the onset and offset of each audio segment is provided. This allows for accurate segmentation and analysis of the spoken content.", "### Dataset Sources [optional]\n\n\n\n- Subset of the Original Kaggle Dataset", "## Uses\n\n- Use for model evaluation or demo" ]
[ 46, 79, 145, 20, 10 ]
[ "passage: TAGS\n#task_categories-token-classification #size_categories-100M<n<1B #language-Chinese #tone #pinyin #sentence #audio #region-us \n## Dataset Details\n- Welcome to the Single-Speaker Mandarin Audio Dataset! This dataset is a curated subset extracted from a larger collection, focusing on audio recordings of a single speaker. Each audio file is accompanied by valuable linguistic annotations, including Pinyin transcriptions, tone information, and onset and offset details.### Dataset Description\n\n\n\n\n\n- Speaker: The dataset exclusively features recordings of a single Mandarin speaker, providing consistency for various linguistic analyses and applications.\n- Pinyin Transcriptions: Each audio file comes with a corresponding Pinyin transcription, offering a phonetic representation of the spoken Mandarin.\n- Tone Information: Tone annotations are included to capture the tonal characteristics of the spoken language. This feature is essential for tone-related studies and applications.\n- Onset and Offset Details: Precise information about the onset and offset of each audio segment is provided. This allows for accurate segmentation and analysis of the spoken content.### Dataset Sources [optional]\n\n\n\n- Subset of the Original Kaggle Dataset## Uses\n\n- Use for model evaluation or demo" ]
b2bc3d4812126154dd22a61e34677572dad35c6c
# Ayat Aktif to Ayat Pasif

Generated using ChatGPT4, originally from https://soalanspm.com/ayat-aktif-dan-ayat-pasif/

Notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/paraphrase/chatgpt4-ayat-aktif-pasif

- [synthetic-ayat-aktif-pasif.jsonl](synthetic-ayat-aktif-pasif.jsonl), 1524 rows, 248KB.

## Example data

```python
{'s': 'Ayat Aktif: Encik Razak mengajar pelajar-pelajar tentang kepentingan menjaga alam sekitar.\nAyat Pasif: Pelajar-pelajar diajar tentang kepentingan menjaga alam sekitar oleh Encik Razak.'}
```
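A minimal parsing sketch (illustrative), assuming every row follows the two-line `Ayat Aktif: ... / Ayat Pasif: ...` format shown in the example above:
```python
import json

pairs = []
with open("synthetic-ayat-aktif-pasif.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        # Each "s" field holds an active sentence and its passive counterpart on two lines.
        aktif, pasif = row["s"].split("\n")
        pairs.append((aktif.removeprefix("Ayat Aktif: "), pasif.removeprefix("Ayat Pasif: ")))

print(len(pairs), pairs[0])
```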
mesolitica/chatgpt4-ayat-aktif-pasif
[ "language:ms", "region:us" ]
2023-11-24T05:21:05+00:00
{"language": ["ms"]}
2024-02-02T06:08:57+00:00
[]
[ "ms" ]
TAGS #language-Malay (macrolanguage) #region-us
# Ayat Aktif to Ayat Pasif Generated using ChatGPT4, originally from URL Notebooks at URL - URL, 1524 rows, 248KB. ## Example data
[ "# Ayat Aktif to Ayat Pasif\n\nGenerated using ChatGPT4, originally from URL\n\nNotebooks at URL\n\n- URL, 1524 rows, 248KB.", "## Example data" ]
[ "TAGS\n#language-Malay (macrolanguage) #region-us \n", "# Ayat Aktif to Ayat Pasif\n\nGenerated using ChatGPT4, originally from URL\n\nNotebooks at URL\n\n- URL, 1524 rows, 248KB.", "## Example data" ]
[ 16, 37, 4 ]
[ "passage: TAGS\n#language-Malay (macrolanguage) #region-us \n# Ayat Aktif to Ayat Pasif\n\nGenerate using ChatGPT4, originally from URL\n\nNotebooks at URL\n\n- URL, 1524 rows, 248KB.## Example data" ]
f5318f276a2ee8fc1f7e8e71b4b2a34b06c40a4d
You can go to Kaggle to find the full dataset:

Paddle Speech -> AISHELL-3 -> Train

https://www.kaggle.com/datasets/zenbot99/paddle-speech/
CS5647Team3/full_dataset
[ "task_categories:text-classification", "language:zh", "tone", "pinyin", "region:us" ]
2023-11-24T05:26:28+00:00
{"language": ["zh"], "task_categories": ["text-classification"], "tags": ["tone", "pinyin"]}
2023-11-24T05:28:48+00:00
[]
[ "zh" ]
TAGS #task_categories-text-classification #language-Chinese #tone #pinyin #region-us
You can go to Kaggle to find the full dataset: Paddle Speech -> AISHELL-3 -> Train URL
[]
[ "TAGS\n#task_categories-text-classification #language-Chinese #tone #pinyin #region-us \n" ]
[ 27 ]
[ "passage: TAGS\n#task_categories-text-classification #language-Chinese #tone #pinyin #region-us \n" ]
b21621ac319709ae52bf83f256442f3f578496ef
# Dataset Card for "UL_GLOBAL_CF" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sshreyy/UL_GLOBAL_CF
[ "region:us" ]
2023-11-24T05:47:04+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 19658992, "num_examples": 8521}, {"name": "test", "num_bytes": 1014574, "num_examples": 447}], "download_size": 7376137, "dataset_size": 20673566}}
2023-11-24T05:48:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "UL_GLOBAL_CF" More Information needed
[ "# Dataset Card for \"UL_GLOBAL_CF\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"UL_GLOBAL_CF\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"UL_GLOBAL_CF\"\n\nMore Information needed" ]
dd6633d349bd2d6111fa19a324e51e3d342fc3a7
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Nimitzzz/LLamadatademo
[ "region:us" ]
2023-11-24T06:02:33+00:00
{}
2023-11-24T06:07:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
b2bd6b65f7539cf7006988192d2948703f096412
# Dataset Card for "snli-contrast" This dataset is the [snli-3way](https://huggingface.co/datasets/AntoineBlanot/snli-3way) dataset with an additional `instruction` feature. This new feature along with its related `label_name` expresses how the `premise` and `hypothesis` features are related in the original dataset. The following explains how the mapping is done: ### If the original example was of class `entailment` Two data points will be related to that example. One is the positive example (i.e., `label_name` == "positive") which assign to it the folowing instruction: "The meaning of the hypothesis is logically inferred from the meaning of the premise." The other is the negative example (i.e., `label_name` == "negative") which assign to it the folowing instruction: "The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise." ### If the original example was of class `contradiction` or `neutral` Two data points will be related to that example. One is the positive example (i.e., `label_name` == "positive") which assign to it the folowing instruction: "The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise." The other is the negative example (i.e., `label_name` == "negative") which assign to it the folowing instruction: "The meaning of the hypothesis is logically inferred from the meaning of the premise." This dataset is double the size of this original dataset because each is related to a positive and negative instruction. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AntoineBlanot/snli-contrast
[ "region:us" ]
2023-11-24T06:04:57+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "label_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 283196540, "num_examples": 1098734}, {"name": "test", "num_bytes": 5199496, "num_examples": 19684}], "download_size": 23437414, "dataset_size": 288396036}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-11-24T06:14:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "snli-contrast" This dataset is the snli-3way dataset with an additional 'instruction' feature. This new feature along with its related 'label_name' expresses how the 'premise' and 'hypothesis' features are related in the original dataset. The following explains how the mapping is done: ### If the original example was of class 'entailment' Two data points will be related to that example. One is the positive example (i.e., 'label_name' == "positive") which assign to it the folowing instruction: "The meaning of the hypothesis is logically inferred from the meaning of the premise." The other is the negative example (i.e., 'label_name' == "negative") which assign to it the folowing instruction: "The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise." ### If the original example was of class 'contradiction' or 'neutral' Two data points will be related to that example. One is the positive example (i.e., 'label_name' == "positive") which assign to it the folowing instruction: "The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise." The other is the negative example (i.e., 'label_name' == "negative") which assign to it the folowing instruction: "The meaning of the hypothesis is logically inferred from the meaning of the premise." This dataset is double the size of this original dataset because each is related to a positive and negative instruction. More Information needed
[ "# Dataset Card for \"snli-contrast\"\nThis dataset is the snli-3way dataset with an additional 'instruction' feature.\nThis new feature along with its related 'label_name' expresses how the 'premise' and 'hypothesis' features are related in the original dataset.\n\nThe following explains how the mapping is done:", "### If the original example was of class 'entailment'\nTwo data points will be related to that example.\n\nOne is the positive example (i.e., 'label_name' == \"positive\") which assign to it the folowing instruction: \"The meaning of the hypothesis is logically inferred from the meaning of the premise.\"\nThe other is the negative example (i.e., 'label_name' == \"negative\") which assign to it the folowing instruction: \"The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise.\"", "### If the original example was of class 'contradiction' or 'neutral'\nTwo data points will be related to that example.\n\nOne is the positive example (i.e., 'label_name' == \"positive\") which assign to it the folowing instruction: \"The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise.\"\nThe other is the negative example (i.e., 'label_name' == \"negative\") which assign to it the folowing instruction: \"The meaning of the hypothesis is logically inferred from the meaning of the premise.\"\n\nThis dataset is double the size of this original dataset because each is related to a positive and negative instruction.\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"snli-contrast\"\nThis dataset is the snli-3way dataset with an additional 'instruction' feature.\nThis new feature along with its related 'label_name' expresses how the 'premise' and 'hypothesis' features are related in the original dataset.\n\nThe following explains how the mapping is done:", "### If the original example was of class 'entailment'\nTwo data points will be related to that example.\n\nOne is the positive example (i.e., 'label_name' == \"positive\") which assign to it the folowing instruction: \"The meaning of the hypothesis is logically inferred from the meaning of the premise.\"\nThe other is the negative example (i.e., 'label_name' == \"negative\") which assign to it the folowing instruction: \"The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise.\"", "### If the original example was of class 'contradiction' or 'neutral'\nTwo data points will be related to that example.\n\nOne is the positive example (i.e., 'label_name' == \"positive\") which assign to it the folowing instruction: \"The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise.\"\nThe other is the negative example (i.e., 'label_name' == \"negative\") which assign to it the folowing instruction: \"The meaning of the hypothesis is logically inferred from the meaning of the premise.\"\n\nThis dataset is double the size of this original dataset because each is related to a positive and negative instruction.\n\nMore Information needed" ]
[ 6, 80, 151, 182 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"snli-contrast\"\nThis dataset is the snli-3way dataset with an additional 'instruction' feature.\nThis new feature along with its related 'label_name' expresses how the 'premise' and 'hypothesis' features are related in the original dataset.\n\nThe following explains how the mapping is done:### If the original example was of class 'entailment'\nTwo data points will be related to that example.\n\nOne is the positive example (i.e., 'label_name' == \"positive\") which assign to it the folowing instruction: \"The meaning of the hypothesis is logically inferred from the meaning of the premise.\"\nThe other is the negative example (i.e., 'label_name' == \"negative\") which assign to it the folowing instruction: \"The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise.\"### If the original example was of class 'contradiction' or 'neutral'\nTwo data points will be related to that example.\n\nOne is the positive example (i.e., 'label_name' == \"positive\") which assign to it the folowing instruction: \"The meaning of the hypothesis either contradicts the meaning of the premise, is unrelated to it, or does not provide sufficient information to infer the meaning of the premise.\"\nThe other is the negative example (i.e., 'label_name' == \"negative\") which assign to it the folowing instruction: \"The meaning of the hypothesis is logically inferred from the meaning of the premise.\"\n\nThis dataset is double the size of this original dataset because each is related to a positive and negative instruction.\n\nMore Information needed" ]
700748e642e47af865610886f20724dd7b215be1
# HRC: Building a human rights corpus for interactive generation models

Data source: https://github.com/human-rights-corpus/HRC/ <br/>
I was not involved in creating this data.

```
@inproceedings{song2023,
    author = {송영숙 and 심상진 and 김성현},
    title = {대화형 생성 모델을 위한 인권 코퍼스 구축},
    booktitle = {한글 및 한국어 정보처리 학술대회 (발표 예정)},
    year = {2023},
    publisher = {한글 및 한국어 정보처리 학회}
}
```
heegyu/HRC
[ "license:cc-by-sa-4.0", "region:us" ]
2023-11-24T06:20:49+00:00
{"license": "cc-by-sa-4.0"}
2023-11-24T06:22:54+00:00
[]
[]
TAGS #license-cc-by-sa-4.0 #region-us
# HRC: Building a human rights corpus for interactive generation models Data source: URL <br/> I was not involved in creating this data.
[ "# HRC: Building a human rights corpus for interactive generation models\nData source: URL <br/>\nI was not involved in creating this data." ]
[ "TAGS\n#license-cc-by-sa-4.0 #region-us \n", "# HRC: Building a human rights corpus for interactive generation models\nData source: URL <br/>\nI was not involved in creating this data." ]
[ 17, 31 ]
[ "passage: TAGS\n#license-cc-by-sa-4.0 #region-us \n# HRC: Building a human rights corpus for interactive generation models\n데이터 원본: URL <br/>\n제가 제작에 참여한 데이터가 아닙니다." ]
84ca599d6e40ec9256e937635e11f349fec1ee7e
# Dataset of mari (Blue Archive)

This is the dataset of mari (Blue Archive), containing 150 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))

| Name            | Images   | Download                                 | Description                                                                               |
|:----------------|---------:|:-----------------------------------------|:------------------------------------------------------------------------------------------|
| raw             | 150      | [Download](dataset-raw.zip)              | Raw data with meta information.                                                           |
| raw-stage3      | 409      | [Download](dataset-raw-stage3.zip)       | 3-stage cropped raw data with meta information.                                           |
| raw-stage3-eyes | 475      | [Download](dataset-raw-stage3-eyes.zip)  | 3-stage cropped (with eye-focus) raw data with meta information.                          |
| 384x512         | 150      | [Download](dataset-384x512.zip)          | 384x512 aligned dataset.                                                                  |
| 512x704         | 150      | [Download](dataset-512x704.zip)          | 512x704 aligned dataset.                                                                  |
| 640x880         | 150      | [Download](dataset-640x880.zip)          | 640x880 aligned dataset.                                                                  |
| stage3-640      | 409      | [Download](dataset-stage3-640.zip)       | 3-stage cropped dataset with the shorter side not exceeding 640 pixels.                   |
| stage3-800      | 409      | [Download](dataset-stage3-800.zip)       | 3-stage cropped dataset with the shorter side not exceeding 800 pixels.                   |
| stage3-p512-640 | 324      | [Download](dataset-stage3-p512-640.zip)  | 3-stage cropped dataset with the area not less than 512x512 pixels.                       |
| stage3-eyes-640 | 475      | [Download](dataset-stage3-eyes-640.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels.  |
| stage3-eyes-800 | 475      | [Download](dataset-stage3-eyes-800.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels.  |
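A minimal download sketch (illustrative): any archive listed in the table above can be fetched with `huggingface_hub`, for example:
```python
from huggingface_hub import hf_hub_download

# Fetch one packaged archive named in the table above.
path = hf_hub_download(
    repo_id="AppleHarem/mari_bluearchive",
    filename="dataset-raw.zip",
    repo_type="dataset",
)
print(path)
```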
AppleHarem/mari_bluearchive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-11-24T06:27:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-11-24T06:27:23+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mari (Blue Archive) ============================== This is the dataset of mari (Blue Archive), containing 150 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
2cf22434089ece656c3cf873438203cde579249d
# Dataset Card for RegexEval <!-- Provide a quick summary of the dataset. --> Re(gEx|DoS)Eval is a framework that includes a dataset of 762 regex descriptions (prompts) from real users, refined prompts with examples, and a robust set of tests. ## Dataset Details ### Dataset Description - **Curated by:** Mohammed Latif Siddiq, Jiahao Zhang, Lindsay Roney, and Joanna C. S. Santos - **Language(s):** English ### Dataset Sources <!-- Provide the basic links for the dataset. --> - **Repository:** https://github.com/s2e-lab/RegexEval - **Paper:** https://s2e-lab.github.io/preprints/icse_nier24-preprint.pdf ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> - dataset.jsonl: dataset file in jsonl format. Every line contains a JSON object with the following fields: - `id`: unique identifier of the sample. - `raw_prompt`: Raw/original prompt from the real users with the description of the RegEx. - `refined_prompt`: Refined prompt with the description of the RegEx. - `matches`: Matches examples for the RegEx. - `non-matches`: Non-matches examples for the RegEx. ## Dataset Creation ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> We mined (on Aug. 16th, 2023) all the regexes from [RegExLib](https://regexlib.com/), a regular expression library. We use this library because it contains user-contributed regular expressions. We obtained from RegExLib a list of 4,128 regular expressions along with their id, description, and list of expected matches and non-match strings. #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> For each sample previously collected, we perform a manual validation to (1) filter out incorrect regexes, (2) create more sample test cases (i.e., matching and non-matching string examples), and (3) create refined problem descriptions (i.e., prompts). We excluded any regex that matched one or more of the following conditions: (i) it was missing any metadata, i.e., description and/or list of expected matches and non-matches; (ii) its description is not written in English; (iii) its description included vulgar words; (iv) its description does not provide sufficient information to understand the purpose of the regular expression; (v) it aimed to detect just one word; (vi) it is incorrect (i.e., the regex matches a string that is not supposed to match, or it does not match a string that is expected to match). After this step, we have 1,001 regex samples. Each collected regex sample had (on average) only 4 string examples (2 that are expected matches and 2 that are expected non-matches). Thus, we manually crafted additional test cases to ensure that each sample has at least 13 matching and 12 non-matching string examples. After creating these additional test strings, we evaluated the regex with the new set of test cases again and excluded the failed regex samples. Hence, we have 762 samples in our final dataset. Upon further inspection of the descriptions in the extracted sample, we observed that some of them lacked a more detailed explanation (e.g., ID#84: “SQL date format tester.”) or had extra information unrelated to the regex (e.g., ID#4: “... Other than that, this is just a really really long description of a regular expression that I’m using to test how my front page will look in the case where very long expression descriptions are used”). Thus, we created a refined prompt with a clear description of the regex that includes three match and two non-match string examples. ## Citation <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ``` @inproceedings{siddiq2024regexeval, author={Siddiq, Mohammed Latif and Zhang, Jiahao and Roney, Lindsay and Santos, Joanna C. S.}, booktitle={Proceedings of the 46th International Conference on Software Engineering, NIER Track (ICSE-NIER '24)}, title={Re(gEx|DoS)Eval: Evaluating Generated Regular Expressions and their Proneness to DoS Attacks}, year={2024} } ``` ## Dataset Card Authors and Contact [Mohammed Latif Siddiq](http://lsiddiqsunny.github.io)
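To illustrate the record layout described in the Dataset Structure section above, here is a minimal sketch that reads dataset.jsonl. The file name and field names are taken from the card; everything else is a plain-Python assumption rather than code from the RegexEval repository.

```python
import json

# Iterate over the jsonl records described in the Dataset Structure section.
with open("dataset.jsonl", encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        # Fields per the card: id, raw_prompt, refined_prompt, matches, non-matches.
        print(rec["id"], rec["refined_prompt"][:60])
        positives = rec["matches"]        # strings the regex should accept
        negatives = rec["non-matches"]    # strings the regex should reject
```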
s2e-lab/RegexEval
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:mit", "regex", "redos", "security", "region:us" ]
2023-11-24T06:41:50+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "RegexEval", "tags": ["regex", "redos", "security"]}
2023-11-29T01:10:58+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-n<1K #language-English #license-mit #regex #redos #security #region-us
# Dataset Card for RegexEval Re(gEx|DoS)Eval is a framework that includes a dataset of 762 regex descriptions (prompts) from real users, refined prompts with examples, and a robust set of tests. ## Dataset Details ### Dataset Description - Curated by: Mohammed Latif Siddiq, Jiahao Zhang, Lindsay Roney, and Joanna C. S. Santos - Language(s): English ### Dataset Sources - Repository: URL - Paper: URL ## Dataset Structure - URL: dataset file in jsonl format. Every line contains a JSON object with the following fields: - 'id': unique identifier of the sample. - 'raw_prompt': Raw/original prompt from the real users with the description of the RegEx. - 'refined_prompt': Refined prompt with the description of the RegEx. - 'matches': Matches examples for the RegEx. - 'non-matches': Non-matches examples for the RegEx. ## Dataset Creation ### Source Data We mined (on Aug. 16th, 2023) all the regexes from RegExLib, a regular expression library. We use this library because it contains user-contributed regular expressions. We obtained from RegExLib a list of 4,128 regular expressions along with their id, description, and list of expected matches and non-match strings. #### Data Collection and Processing For each sample previously collected, we perform a manual validation to (1) filter out incorrect regexes, (2) create more sample test cases (i.e., matching and non-matching string examples), and (3) create refined problem descriptions (i.e., prompts). We excluded any regex that matched one or more of the following conditions: (i) it was missing any metadata, i.e., description and/or list of expected matches and non-matches; (ii) its description is not written in English; (iii) its description included vulgar words; (iv) its description does not provide sufficient information to understand the purpose of the regular expression; (v) it aimed to detect just one word; (vi) it is incorrect (i.e., the regex matches a string that is not supposed to match, or it does not match a string that is expected to match). After this step, we have 1,001 regex samples. Each collected regex sample had (on average) only 4 string examples (2 that are expected matches and 2 that are expected non-matches). Thus, we manually crafted additional test cases to ensure that each sample has at least 13 matching and 12 non-matching string examples. After creating these additional test strings, we evaluated the regex with the new set of test cases again and excluded the failed regex samples. Hence, we have 762 samples in our final dataset. Upon further inspection of the descriptions in the extracted sample, we observed that some of them lacked a more detailed explanation (e.g., ID#84: “SQL date format tester.”) or had extra information unrelated to the regex (e.g., ID#4: “... Other than that, this is just a really really long description of a regular expression that I’m using to test how my front page will look in the case where very long expression descriptions are used”). Thus, we created a refined prompt with a clear description of the regex that includes three match and two non-match string examples. BibTeX: ## Dataset Card Authors and Contact Mohammed Latif Siddiq
[ "# Dataset Card for RegexEval\n\n\n\nRe(gEx|DoS)Eval is a framework that includes a dataset of 762 regex descriptions (prompts) from real users, refined prompts with examples, and a robust set of tests.", "## Dataset Details", "### Dataset Description\n\n- Curated by: Mohammed Latif Siddiq, Jiahao Zhang, Lindsay Roney, and Joanna C. S. Santos\n- Language(s): English", "### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: URL", "## Dataset Structure\n\n\n\n- URL: dataset file in jsonl format. Every line contains a JSON object with the following fields:\n - 'id': unique identifier of the sample.\n - 'raw_prompt': Raw/original prompt from the real users with the description of the RegEx.\n - 'refined_prompt': Refined prompt with the description of the RegEx.\n - 'matches': Matches examples for the RegEx.\n - 'non-matches': Non-matches examples for the RegEx.", "## Dataset Creation", "### Source Data\n\nWe mined (on Aug. 16th, 2023) all the regexes from RegExLib, a regular expression library. We use this library because it contains user-contributed regular expressions. \nWe obtained from RegExLib a list of 4,128 regular expressions along with their id, description, and list of expected matches and non-match strings.", "#### Data Collection and Processing\n\nFor each sample previously collected, we perform a manual validation to (1) filter out incorrect regexes, (2) create more sample test cases (i.e., matching and non-matching string examples), and (3) create refined problem descriptions (i.e., prompts).\nWe excluded any regex that matched one or more of the following conditions: (i) it was missing any metadata, i.e., description and/or list of expected matches and non- matches; (ii) its description is not written in English; (iii) its description included vulgar words; (iv) its description does not provide sufficient information to understand the purpose of the regular expression; (v) it aimed to detect just one word; (vi) it is incorrect (i.e., the regex matches a string that is not supposed to match, or it does not match a string that is expected to match). After this step, we have 1,001 regex samples.\n\nEach collected regex sample had (on average) only 4 string examples (2 that are expected matches and 2 that are expected non-matches). Thus, we manually crafted additional test cases to ensure that each sample has at least 13 matching1 and 12 non-matching string examples. After creating these additional test strings, we evaluated the regex with the new set of test cases again and excluded the failed regex samples. Hence, we have 762 samples in our final dataset.\n\nUpon further inspection of the descriptions in the extracted sample, we observed that some of them lacked a more detailed explanation (e.g., ID#84: “SQL date format tester.”) or had extra information unrelated to the regex (e.g., ID#4: “... Other than that, this is just a really really long description of a regular expression that I’m using to test how my front page will look in the case where very long expression descriptions are used”). Thus, we created a refined prompt with a clear description of the regex that includes three match and two non-match string examples.\n\n\nBibTeX:", "## Dataset Card Authors and Contact\n\nMohammed Latif Siddiq" ]
[ "TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-mit #regex #redos #security #region-us \n", "# Dataset Card for RegexEval\n\n\n\nRe(gEx|DoS)Eval is a framework that includes a dataset of 762 regex descriptions (prompts) from real users, refined prompts with examples, and a robust set of tests.", "## Dataset Details", "### Dataset Description\n\n- Curated by: Mohammed Latif Siddiq, Jiahao Zhang, Lindsay Roney, and Joanna C. S. Santos\n- Language(s): English", "### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: URL", "## Dataset Structure\n\n\n\n- URL: dataset file in jsonl format. Every line contains a JSON object with the following fields:\n - 'id': unique identifier of the sample.\n - 'raw_prompt': Raw/original prompt from the real users with the description of the RegEx.\n - 'refined_prompt': Refined prompt with the description of the RegEx.\n - 'matches': Matches examples for the RegEx.\n - 'non-matches': Non-matches examples for the RegEx.", "## Dataset Creation", "### Source Data\n\nWe mined (on Aug. 16th, 2023) all the regexes from RegExLib, a regular expression library. We use this library because it contains user-contributed regular expressions. \nWe obtained from RegExLib a list of 4,128 regular expressions along with their id, description, and list of expected matches and non-match strings.", "#### Data Collection and Processing\n\nFor each sample previously collected, we perform a manual validation to (1) filter out incorrect regexes, (2) create more sample test cases (i.e., matching and non-matching string examples), and (3) create refined problem descriptions (i.e., prompts).\nWe excluded any regex that matched one or more of the following conditions: (i) it was missing any metadata, i.e., description and/or list of expected matches and non- matches; (ii) its description is not written in English; (iii) its description included vulgar words; (iv) its description does not provide sufficient information to understand the purpose of the regular expression; (v) it aimed to detect just one word; (vi) it is incorrect (i.e., the regex matches a string that is not supposed to match, or it does not match a string that is expected to match). After this step, we have 1,001 regex samples.\n\nEach collected regex sample had (on average) only 4 string examples (2 that are expected matches and 2 that are expected non-matches). Thus, we manually crafted additional test cases to ensure that each sample has at least 13 matching1 and 12 non-matching string examples. After creating these additional test strings, we evaluated the regex with the new set of test cases again and excluded the failed regex samples. Hence, we have 762 samples in our final dataset.\n\nUpon further inspection of the descriptions in the extracted sample, we observed that some of them lacked a more detailed explanation (e.g., ID#84: “SQL date format tester.”) or had extra information unrelated to the regex (e.g., ID#4: “... Other than that, this is just a really really long description of a regular expression that I’m using to test how my front page will look in the case where very long expression descriptions are used”). Thus, we created a refined prompt with a clear description of the regex that includes three match and two non-match string examples.\n\n\nBibTeX:", "## Dataset Card Authors and Contact\n\nMohammed Latif Siddiq" ]
[ 44, 59, 4, 41, 16, 127, 5, 85, 478, 13 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-mit #regex #redos #security #region-us \n# Dataset Card for RegexEval\n\n\n\nRe(gEx|DoS)Eval is a framework that includes a dataset of 762 regex descriptions (prompts) from real users, refined prompts with examples, and a robust set of tests.## Dataset Details### Dataset Description\n\n- Curated by: Mohammed Latif Siddiq, Jiahao Zhang, Lindsay Roney, and Joanna C. S. Santos\n- Language(s): English### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: URL## Dataset Structure\n\n\n\n- URL: dataset file in jsonl format. Every line contains a JSON object with the following fields:\n - 'id': unique identifier of the sample.\n - 'raw_prompt': Raw/original prompt from the real users with the description of the RegEx.\n - 'refined_prompt': Refined prompt with the description of the RegEx.\n - 'matches': Matches examples for the RegEx.\n - 'non-matches': Non-matches examples for the RegEx.## Dataset Creation### Source Data\n\nWe mined (on Aug. 16th, 2023) all the regexes from RegExLib, a regular expression library. We use this library because it contains user-contributed regular expressions. \nWe obtained from RegExLib a list of 4,128 regular expressions along with their id, description, and list of expected matches and non-match strings." ]
ce853de80a11ea18242b9898e2d02ec19dd66896
### Data source: https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=86 Data built using only the Human Response 1 (사람응답1) and System Response 1 (시스템 응답1) fields from that dataset
youngwoo3283/df_sentiment_chat
[ "language:ko", "region:us" ]
2023-11-24T07:05:40+00:00
{"language": ["ko"]}
2023-11-24T07:12:36+00:00
[]
[ "ko" ]
TAGS #language-Korean #region-us
### Data source: URL Data built using only the Human Response 1 (사람응답1) and System Response 1 (시스템 응답1) fields from that dataset
[ "### 데이터 출처 : URL\n\n해당 데이터에서 사람응답1과 시스템 응답1로만 만든 데이터" ]
[ "TAGS\n#language-Korean #region-us \n", "### 데이터 출처 : URL\n\n해당 데이터에서 사람응답1과 시스템 응답1로만 만든 데이터" ]
[ 11, 22 ]
[ "passage: TAGS\n#language-Korean #region-us \n### 데이터 출처 : URL\n\n해당 데이터에서 사람응답1과 시스템 응답1로만 만든 데이터" ]
1edfbd4f63a336ad3d65be2b74b0a73640675942
# Dataset Card for LFANIME A dataset of anime frames collected by KaraKaraWitch. ## Dataset Details ### Dataset Description LFANIME, or Low-Framerate Anime, comprises frames from Japanese animation. The dataset serves dual purposes—facilitating fine-tuning of image diffusion models and functioning as a pre-training resource. Moreover, we anticipate its utilization in image classification. Important Note: LFAnime is not intended for watching anime. To discourage this application, we have intentionally lowered the frame rate and excluded audio from the dataset. - **Curated by:** KaraKaraWitch - **Funded by [optional]:** N/A - **Shared by [optional]:** N/A - **Language(s) (NLP):** Nil. Primarily Japanese, but no audio is included. - **License:** CC ## Uses A tar file compresses each "Episode," encompassing sequential anime frames. The dataset also incorporates chapters for episodes that have them. It's important to note that certain frame numbers may be absent intentionally. ### Direct Use <!-- This section describes suitable use cases for the dataset. --> We release this dataset for free in the hopes that it could be used for text to image generation and/or image classification. ### Out-of-Scope Use Technically speaking, this dataset could be used to watch anime. However, we do not recommend doing so. Additionally, there could be unforeseen usage that the author does not intend. <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> Each tar file should generally follow this format `LFAnime-[T(Test),A(Alpha),B(Beta),R(Release)]-[Sequential Index]-[AnilistID]-[Episode]` Each tar file should contain: ``` frame_[XXXX]_[detection_type]_[seconds (float)].jpg kframes.log (scxvid keyframe log) metadata.json (Selected frames + Detection metrics + Mode) ``` `detection_type` can be one of the following: ``` - key (KeyFrame) - p_key (Previous Frame from Key Frame) - inter (Inter frame) ``` ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> The emphasis has been on developing models for generating images from text, particularly in the realm of creating "anime"-style visuals. Examples of such models include Waifu Diffusion and NovelAI's SD 1.x models. Regrettably, these models tend to converge, resulting in a consistent aesthetic. While this aesthetic may appeal to many users, it poses a challenge when attempting to diverge from or fine-tune the ingrained visual style of most SD 1.x models. ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> We've opted not to reveal the specific origins of the anime to establish a level of separation between the producers and this dataset. Nevertheless, we can outline the processing steps as follows: 1. Extract frames from the mkv file, sampling every 10 frames per second. 2. Utilize scxvid to generate a timecode for identifying scene cuts. 3. Exclude frames that precede or follow a scene cut (considering potential inclusion of 1/2 frames at each scene cut). 4. Save the processed frames to a tar file. #### Who are the source data producers? We have decided not to disclose the exact sources. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> As this dataset is a personal collection from KaraKaraWitch, it will have tendencies to generally not include "Shonen" anime and will have female protagonists in general. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. ## Citation [optional] ``` @misc{lfanime, title = {LFAnime: A Low Framerate anime dataset.}, author = {KaraKaraWitch}, year = {2023}, howpublished = {\url{https://huggingface.co/datasets/RyokoExtra/LFANIME}}, } ``` ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> Anime: > Anime (Japanese: アニメ, IPA: [aꜜɲime]) is hand-drawn and computer-generated animation originating from Japan. Outside Japan and in English, anime refers specifically to animation produced in Japan.[1] However, in Japan and in Japanese, anime (a term derived from a shortening of the English word animation) describes all animated works, regardless of style or origin. Many works of animation with a similar style to Japanese animation are also produced outside Japan. Video games sometimes also feature themes and artstyles that can be considered as "anime". > - Wikipedia ### Contributions - [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset. - [ChatGPT](https://chat.openai.com) rewording sentences in this datacard.
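The collection steps above can be approximated with a short script. This is a minimal sketch assuming OpenCV and an already-parsed set of scene-cut frame indices (the real pipeline used scxvid logs and a different sampling rule), so treat the function name, the `step` parameter, and the skip logic as illustrative assumptions rather than the dataset's actual code.

```python
import cv2

def sample_frames(video_path: str, cut_frames: set[int], out_dir: str, step: int = 10) -> None:
    """Save every `step`-th frame, skipping frames adjacent to a scene cut."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Step 3 above: drop frames that precede or follow a detected scene cut.
        near_cut = (index in cut_frames) or (index + 1 in cut_frames) or (index - 1 in cut_frames)
        if index % step == 0 and not near_cut:
            seconds = index / fps
            # Mirrors the card's frame_[XXXX]_[detection_type]_[seconds].jpg convention.
            cv2.imwrite(f"{out_dir}/frame_{index:04d}_inter_{seconds:.3f}.jpg", frame)
        index += 1
    cap.release()
```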
RyokoExtra/LFANIME
[ "task_categories:image-classification", "task_categories:text-to-image", "license:cc", "art", "anime", "region:us" ]
2023-11-24T07:17:44+00:00
{"license": "cc", "task_categories": ["image-classification", "text-to-image"], "pretty_name": "LFAnime", "tags": ["art", "anime"]}
2023-12-29T03:28:18+00:00
[]
[]
TAGS #task_categories-image-classification #task_categories-text-to-image #license-cc #art #anime #region-us
# Dataset Card for LFANIME A dataset of anime frames collected by KaraKaraWitch. ## Dataset Details ### Dataset Description LFANIME, or Low-Framerate Anime, comprises frames from Japanese animation. The dataset serves dual purposes—facilitating fine-tuning of image diffusion models and functioning as a pre-training resource. Moreover, we anticipate its utilization in image classification. Important Note: LFAnime is not intended for watching anime. To discourage this application, we have intentionally lowered the frame rate and excluded audio from the dataset. - Curated by: KaraKaraWitch - Funded by [optional]: N/A - Shared by [optional]: N/A - Language(s) (NLP): Nil. Primarily Japanese, but no audio is included. - License: CC ## Uses A tar file compresses each "Episode," encompassing sequential anime frames. The dataset also incorporates chapters for episodes that have them. It's important to note that certain frame numbers may be absent intentionally. ### Direct Use We release this dataset for free in the hopes that it could be used for text to image generation and/or image classification. ### Out-of-Scope Use Technically speaking, this dataset could be used to watch anime. However, we do not recommend doing so. Additionally, there could be unforeseen usage that the author does not intend. ## Dataset Structure Each tar file should generally follow this format 'LFAnime-[T(Test),A(Alpha),B(Beta),R(Release)]-[Sequential Index]-[AnilistID]-[Episode]' Each tar file should contain: 'detection_type' can be one of the following: ## Dataset Creation ### Curation Rationale The emphasis has been on developing models for generating images from text, particularly in the realm of creating "anime"-style visuals. Examples of such models include Waifu Diffusion and NovelAI's SD 1.x models. Regrettably, these models tend to converge, resulting in a consistent aesthetic. While this aesthetic may appeal to many users, it poses a challenge when attempting to diverge from or fine-tune the ingrained visual style of most SD 1.x models. ### Source Data #### Data Collection and Processing We've opted not to reveal the specific origins of the anime to establish a level of separation between the producers and this dataset. Nevertheless, we can outline the processing steps as follows: 1. Extract frames from the mkv file, sampling every 10 frames per second. 2. Utilize scxvid to generate a timecode for identifying scene cuts. 3. Exclude frames that precede or follow a scene cut (considering potential inclusion of 1/2 frames at each scene cut). 4. Save the processed frames to a tar file. #### Who are the source data producers? We have decided not to disclose the exact sources. ## Bias, Risks, and Limitations As this dataset is a personal collection from KaraKaraWitch, it will have tendencies to generally not include "Shonen" anime and will have female protagonists in general. ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. [optional] ## Glossary [optional] Anime: > Anime (Japanese: アニメ, IPA: [aꜜɲime]) is hand-drawn and computer-generated animation originating from Japan. Outside Japan and in English, anime refers specifically to animation produced in Japan.[1] However, in Japan and in Japanese, anime (a term derived from a shortening of the English word animation) describes all animated works, regardless of style or origin. Many works of animation with a similar style to Japanese animation are also produced outside Japan. Video games sometimes also feature themes and artstyles that can be considered as "anime". > - Wikipedia ### Contributions - @KaraKaraWitch (Twitter) for gathering this dataset. - ChatGPT rewording sentences in this datacard.
[ "# Dataset Card for LFANIME\n\nA dataset of anime frames collected by KaraKaraWitch.", "## Dataset Details", "### Dataset Description\n\nLFANIME, or Low-Framerate Anime, comprises frames from Japanese animation. The dataset serves dual purposes—facilitating fine-tuning of image diffusion models and functioning as a pre-training resource. Moreover, we anticipate its utilization in image classification.\n\nImportant Note: LFAnime is not intended for watching anime. To discourage this application, we have intentionally lowered the frame rate and excluded audio from the dataset.\n\n- Curated by: KaraKaraWitch\n- Funded by [optional]: N/A\n- Shared by [optional]: N/A\n- Language(s) (NLP): Nil. Primarily japanese, but no audio is included.\n- License: CC", "## Uses\n\nA tar file compresses each \"Episode,\" encompassing sequential anime frames. The dataset also incorporates chapters for episodes that have them. It's important to note that certain frame numbers may be absent intentionally.", "### Direct Use\n\n\n\nWe release this dataset for free in the hopes that it could be used for text to image generation and/or image classification.", "### Out-of-Scope Use\n\nTechnically speaking, this dataset could be used to watch anime. However we do not recommend as such. \nAdditionally there could be unforseen usage that the author does not intend.", "## Dataset Structure\n\n\n\nEach tar file should generally follow this format 'LFAnime-[T(Test),A(Alpha),B(Beta),R(Release)]-[Sequential Index]-[AnilistID]-[Episode]'\n\nEach tar file should contain:\n\n\n'detection_type' can be one of the following:", "## Dataset Creation", "### Curation Rationale\n\n\n\nThe emphasis has been on developing models for generating images from text, particularly in the realm of creating \"anime\"-style visuals. \nExamples of such models include Waifu Diffusion and NovelAI's SD 1.x models. Regrettably, these models tend to converge, resulting in a consistent aesthetic. \nWhile this aesthetic may appeal to many users, it poses a challenge when attempting to diverge from or fine-tune the ingrained visual style of most SD 1.x models.", "### Source Data", "#### Data Collection and Processing\n\n\n\nWe've opted not to reveal the specific origins of the anime to establish a level of separation between the producers and this dataset. \nNevertheless, we can outline the processing steps as follows:\n\n1. Extract frames from the mkv file, sampling every 10 frames per second.\n2. Utilize scxvid to generate a timecode for identifying scene cuts.\n3. Exclude frames that precede or follow a scene cut (considering potential inclusion of 1/2 frames at each scene cut).\n4. Save the processed frames to a tar file.", "#### Who are the source data producers?\n\nWe have decided not to disclose the exact sources.", "## Bias, Risks, and Limitations\n\n\n\nAs this dataset is a personal collection from KaraKaraWitch, it will have tendencies to generally not \"Shonen\" anime and will have female protagonists in general.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset.\n\n[optional]", "## Glossary [optional]\n\n\n\nAnime:\n\n> Anime (Japanese: アニメ, IPA: [aꜜɲime]) is hand-drawn and computer-generated animation originating from Japan. Outside Japan and in English, anime refers specifically to animation produced in Japan.[1] However, in Japan and in Japanese, anime (a term derived from a shortening of the English word animation) describes all animated works, regardless of style or origin. 
Many works of animation with a similar style to Japanese animation are also produced outside Japan. Video games sometimes also feature themes and artstyles that can be considered as \"anime\". \n> - Wikipedia", "### Contributions\n\n- @KaraKaraWitch (Twitter) for gathering this dataset.\n- ChatGPT rewording sentences in this datacard." ]
[ "TAGS\n#task_categories-image-classification #task_categories-text-to-image #license-cc #art #anime #region-us \n", "# Dataset Card for LFANIME\n\nA dataset of anime frames collected by KaraKaraWitch.", "## Dataset Details", "### Dataset Description\n\nLFANIME, or Low-Framerate Anime, comprises frames from Japanese animation. The dataset serves dual purposes—facilitating fine-tuning of image diffusion models and functioning as a pre-training resource. Moreover, we anticipate its utilization in image classification.\n\nImportant Note: LFAnime is not intended for watching anime. To discourage this application, we have intentionally lowered the frame rate and excluded audio from the dataset.\n\n- Curated by: KaraKaraWitch\n- Funded by [optional]: N/A\n- Shared by [optional]: N/A\n- Language(s) (NLP): Nil. Primarily japanese, but no audio is included.\n- License: CC", "## Uses\n\nA tar file compresses each \"Episode,\" encompassing sequential anime frames. The dataset also incorporates chapters for episodes that have them. It's important to note that certain frame numbers may be absent intentionally.", "### Direct Use\n\n\n\nWe release this dataset for free in the hopes that it could be used for text to image generation and/or image classification.", "### Out-of-Scope Use\n\nTechnically speaking, this dataset could be used to watch anime. However we do not recommend as such. \nAdditionally there could be unforseen usage that the author does not intend.", "## Dataset Structure\n\n\n\nEach tar file should generally follow this format 'LFAnime-[T(Test),A(Alpha),B(Beta),R(Release)]-[Sequential Index]-[AnilistID]-[Episode]'\n\nEach tar file should contain:\n\n\n'detection_type' can be one of the following:", "## Dataset Creation", "### Curation Rationale\n\n\n\nThe emphasis has been on developing models for generating images from text, particularly in the realm of creating \"anime\"-style visuals. \nExamples of such models include Waifu Diffusion and NovelAI's SD 1.x models. Regrettably, these models tend to converge, resulting in a consistent aesthetic. \nWhile this aesthetic may appeal to many users, it poses a challenge when attempting to diverge from or fine-tune the ingrained visual style of most SD 1.x models.", "### Source Data", "#### Data Collection and Processing\n\n\n\nWe've opted not to reveal the specific origins of the anime to establish a level of separation between the producers and this dataset. \nNevertheless, we can outline the processing steps as follows:\n\n1. Extract frames from the mkv file, sampling every 10 frames per second.\n2. Utilize scxvid to generate a timecode for identifying scene cuts.\n3. Exclude frames that precede or follow a scene cut (considering potential inclusion of 1/2 frames at each scene cut).\n4. Save the processed frames to a tar file.", "#### Who are the source data producers?\n\nWe have decided not to disclose the exact sources.", "## Bias, Risks, and Limitations\n\n\n\nAs this dataset is a personal collection from KaraKaraWitch, it will have tendencies to generally not \"Shonen\" anime and will have female protagonists in general.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset.\n\n[optional]", "## Glossary [optional]\n\n\n\nAnime:\n\n> Anime (Japanese: アニメ, IPA: [aꜜɲime]) is hand-drawn and computer-generated animation originating from Japan. 
Outside Japan and in English, anime refers specifically to animation produced in Japan.[1] However, in Japan and in Japanese, anime (a term derived from a shortening of the English word animation) describes all animated works, regardless of style or origin. Many works of animation with a similar style to Japanese animation are also produced outside Japan. Video games sometimes also feature themes and artstyles that can be considered as \"anime\". \n> - Wikipedia", "### Contributions\n\n- @KaraKaraWitch (Twitter) for gathering this dataset.\n- ChatGPT rewording sentences in this datacard." ]
[ 39, 24, 4, 170, 57, 32, 49, 82, 5, 117, 4, 134, 21, 47, 32, 146, 37 ]
[ "passage: TAGS\n#task_categories-image-classification #task_categories-text-to-image #license-cc #art #anime #region-us \n# Dataset Card for LFANIME\n\nA dataset of anime frames collected by KaraKaraWitch.## Dataset Details### Dataset Description\n\nLFANIME, or Low-Framerate Anime, comprises frames from Japanese animation. The dataset serves dual purposes—facilitating fine-tuning of image diffusion models and functioning as a pre-training resource. Moreover, we anticipate its utilization in image classification.\n\nImportant Note: LFAnime is not intended for watching anime. To discourage this application, we have intentionally lowered the frame rate and excluded audio from the dataset.\n\n- Curated by: KaraKaraWitch\n- Funded by [optional]: N/A\n- Shared by [optional]: N/A\n- Language(s) (NLP): Nil. Primarily japanese, but no audio is included.\n- License: CC## Uses\n\nA tar file compresses each \"Episode,\" encompassing sequential anime frames. The dataset also incorporates chapters for episodes that have them. It's important to note that certain frame numbers may be absent intentionally.### Direct Use\n\n\n\nWe release this dataset for free in the hopes that it could be used for text to image generation and/or image classification.### Out-of-Scope Use\n\nTechnically speaking, this dataset could be used to watch anime. However we do not recommend as such. \nAdditionally there could be unforseen usage that the author does not intend.## Dataset Structure\n\n\n\nEach tar file should generally follow this format 'LFAnime-[T(Test),A(Alpha),B(Beta),R(Release)]-[Sequential Index]-[AnilistID]-[Episode]'\n\nEach tar file should contain:\n\n\n'detection_type' can be one of the following:## Dataset Creation" ]
2d1453147eb2ae4c5026101efd0ce306894cac6e
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
ciscak/networks-test1
[ "license:mit", "region:us" ]
2023-11-24T07:26:53+00:00
{"license": "mit"}
2023-11-24T07:30:00+00:00
[]
[]
TAGS #license-mit #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#license-mit #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 11, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#license-mit #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
8e3adaeeef9bcb4edf2f120c6c7d73a52f06b8f6
# mtkinit/small_sentiment_dataset Created from AIOD platform
mtkinit/mtkinit_small_sentiment_dataset
[ "region:us" ]
2023-11-24T07:28:14+00:00
{"pretty_name": "mtkinit/small_sentiment_dataset"}
2023-11-24T07:28:15+00:00
[]
[]
TAGS #region-us
# mtkinit/small_sentiment_dataset Created from AIOD platform
[ "# mtkinit/small_sentiment_dataset\nCreated from AIOD platform" ]
[ "TAGS\n#region-us \n", "# mtkinit/small_sentiment_dataset\nCreated from AIOD platform" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# mtkinit/small_sentiment_dataset\nCreated from AIOD platform" ]
dfafc62f29a036db6d8ef2191702c74926a69ca4
# mtkinit/tcb_small_sentiment_dataset Created from AIOD platform
mtkinit/mtkinit_tcb_small_sentiment_dataset
[ "region:us" ]
2023-11-24T08:09:56+00:00
{"pretty_name": "mtkinit/tcb_small_sentiment_dataset"}
2023-11-24T08:09:57+00:00
[]
[]
TAGS #region-us
# mtkinit/tcb_small_sentiment_dataset Created from AIOD platform
[ "# mtkinit/tcb_small_sentiment_dataset\nCreated from AIOD platform" ]
[ "TAGS\n#region-us \n", "# mtkinit/tcb_small_sentiment_dataset\nCreated from AIOD platform" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# mtkinit/tcb_small_sentiment_dataset\nCreated from AIOD platform" ]
a712a35fcb4d9c0fd3f1cf09ea30539b931c3174
1500 samples from the train split.
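One plausible way to reproduce such a subset with the datasets library is sketched below; the seed and selection method are assumptions, since the card does not state how the 1500 samples were chosen.

```python
from datasets import load_dataset

# Take 1500 examples from the GSM8K train split as a dev set (illustrative only).
train = load_dataset("gsm8k", "main", split="train")
dev = train.shuffle(seed=42).select(range(1500))
print(len(dev))  # 1500
```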
Singhoo/gsm8k_dev
[ "region:us" ]
2023-11-24T08:17:14+00:00
{}
2023-11-24T08:17:59+00:00
[]
[]
TAGS #region-us
1500 samples from the train split.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
1a41ab2d8f050eec4a36174d31cfbe9d9e56d885
## Dataset Description Repository: https://github.com/verimsu/STSb-TR ### Dataset Summary The STSb-TR dataset is the machine-translated version of the English STS benchmark dataset using Google Cloud Translation API. ### Citation ``` @inproceedings{beken-fikri-etal-2021-semantic, title = "Semantic Similarity Based Evaluation for Abstractive News Summarization", author = "Beken Fikri, Figen and Oflazer, Kemal and Yanikoglu, Berrin", booktitle = "Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.gem-1.3", doi = "10.18653/v1/2021.gem-1.3", pages = "24--33", abstract = "ROUGE is a widely used evaluation metric in text summarization. However, it is not suitable for the evaluation of abstractive summarization systems as it relies on lexical overlap between the gold standard and the generated summaries. This limitation becomes more apparent for agglutinative languages with very large vocabularies and high type/token ratios. In this paper, we present semantic similarity models for Turkish and apply them as evaluation metrics for an abstractive summarization task. To achieve this, we translated the English STSb dataset into Turkish and presented the first semantic textual similarity dataset for Turkish as well. We showed that our best similarity models have better alignment with average human judgments compared to ROUGE in both Pearson and Spearman correlations.", } ```
figenfikri/stsb_tr
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-sts-b", "language:tr", "region:us" ]
2023-11-24T08:26:52+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["machine-generated"], "language": ["tr"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|other-sts-b"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "semantic-similarity-scoring"], "pretty_name": "Semantic Textual Similarity in Turkish"}
2023-11-24T08:28:16+00:00
[]
[ "tr" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-sts-b #language-Turkish #region-us
## Dataset Description Repository: URL ### Dataset Summary The STSb-TR dataset is the machine-translated version of the English STS benchmark dataset using Google Cloud Translation API.
[ "## Dataset Description\n\nRepository: URL", "### Dataset Summary\n\nSTSb-TR dataset is the machine translated version of English STS benchmark dataset using Google Cloud Translation API." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-sts-b #language-Turkish #region-us \n", "## Dataset Description\n\nRepository: URL", "### Dataset Summary\n\nSTSb-TR dataset is the machine translated version of English STS benchmark dataset using Google Cloud Translation API." ]
[ 109, 9, 33 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-sts-b #language-Turkish #region-us \n## Dataset Description\n\nRepository: URL### Dataset Summary\n\nSTSb-TR dataset is the machine translated version of English STS benchmark dataset using Google Cloud Translation API." ]
f81a9231174d6cdfc52e603a9a32898427d6e13e
# Dataset Card for "contracts_v6" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
paul-w-qs/contracts_v6
[ "region:us" ]
2023-11-24T09:29:43+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "N_ROWS", "dtype": "int64"}, {"name": "N_COLS", "dtype": "int64"}, {"name": "FONT_SIZE", "dtype": "int64"}, {"name": "FONT_NAME", "dtype": "string"}, {"name": "BORDER_THICKNESS", "dtype": "int64"}, {"name": "TABLE_STYLE", "dtype": "string"}, {"name": "NOISED", "dtype": "bool"}, {"name": "LABEL_NOISE", "dtype": "bool"}, {"name": "JSON_LABEL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 360922904.016, "num_examples": 5364}], "download_size": 360853881, "dataset_size": 360922904.016}}
2023-11-24T09:32:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "contracts_v6" More Information needed
[ "# Dataset Card for \"contracts_v6\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"contracts_v6\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"contracts_v6\"\n\nMore Information needed" ]
a5200e67ac1b7d90d97c1c7d8f0a9eb55c81bd8a
# Dataset Card for "gtzan" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
confit/gtzan
[ "region:us" ]
2023-11-24T09:32:29+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "blues", "1": "classical", "2": "country", "3": "disco", "4": "hiphop", "5": "jazz", "6": "metal", "7": "pop", "8": "reggae", "9": "rock"}}}}], "splits": [{"name": "train", "num_bytes": 17683, "num_examples": 443}, {"name": "validation", "num_bytes": 7871, "num_examples": 197}, {"name": "test", "num_bytes": 11546, "num_examples": 290}], "download_size": 10908, "dataset_size": 37100}}
2023-11-24T10:20:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gtzan" More Information needed
[ "# Dataset Card for \"gtzan\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gtzan\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gtzan\"\n\nMore Information needed" ]
6a1f6261048591f4f730343195ef485d18859656
# mtkinit/TCB-sentiment-dataset Created from AIOD platform
mtkinit/mtkinit_TCB_sentiment_dataset
[ "region:us" ]
2023-11-24T09:44:24+00:00
{"pretty_name": "mtkinit/TCB-sentiment-dataset"}
2023-11-24T09:44:25+00:00
[]
[]
TAGS #region-us
# mtkinit/TCB-sentiment-dataset Created from AIOD platform
[ "# mtkinit/TCB-sentiment-dataset\nCreated from AIOD platform" ]
[ "TAGS\n#region-us \n", "# mtkinit/TCB-sentiment-dataset\nCreated from AIOD platform" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# mtkinit/TCB-sentiment-dataset\nCreated from AIOD platform" ]
e820f5f3088d61a127b3c3558bdd7e0fb3c495a2
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description Speeches from the Norwegian parliament from 1998 to 2022. Parsed from the Norwegian part of the EU ParlaMint, ParlaMint-NO ### Dataset Sources Source: https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-77/
trymtv/norwegian-parliament-speeches
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:no", "license:cc0-1.0", "region:us" ]
2023-11-24T10:06:57+00:00
{"language": ["no"], "license": "cc0-1.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "Norwegian parliament speeches"}
2023-11-24T10:14:58+00:00
[]
[ "no" ]
TAGS #task_categories-text-classification #size_categories-100K<n<1M #language-Norwegian #license-cc0-1.0 #region-us
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description Speeches from the Norwegian parliament from 1998 to 2022. Parsed from the Norwegian part of the EU ParlaMint, ParlaMint-NO ### Dataset Sources Source: URL
[ "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\nSpeeches from the Norwegian parliament from 1998 and 2022. Parsed from the Norwegian part of the EU ParlaMint, ParlaMint-NO", "### Dataset Sources\n\nSource: URL" ]
[ "TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Norwegian #license-cc0-1.0 #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\nSpeeches from the Norwegian parliament from 1998 and 2022. Parsed from the Norwegian part of the EU ParlaMint, ParlaMint-NO", "### Dataset Sources\n\nSource: URL" ]
[ 43, 8, 4, 38, 9 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Norwegian #license-cc0-1.0 #region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\nSpeeches from the Norwegian parliament from 1998 and 2022. Parsed from the Norwegian part of the EU ParlaMint, ParlaMint-NO### Dataset Sources\n\nSource: URL" ]
b20c77c44ba3cc07a6b563c750401f4ae38f569c
RGB-D dataset for instance segmentation (from RGB or depth) and pose estimation of individual objects. Data has been generated by randomizing bin contents in Webots. Each instance contains a mask image as well as metadata containing labels, position, and size of each object. <video src='https://cdn-uploads.huggingface.co/production/uploads/655b1b359d249b4ab388d4a2/l6b76ezxkPi6lG3Fr6_kj.mp4' width=720/> You can create your own data by opening webots_grasp.wbt in the world directory using [Webots](https://www.cyberbotics.com).
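A minimal loading sketch, assuming the dataset loads with the standard datasets API; the field names below come from this repo's dataset_info (rgb, depth, mask, and a per-object meta list with model, position, orientation, and sizeOnImage entries).

```python
from datasets import load_dataset

ds = load_dataset("correll/semanticsegmentationandposeestimationfromrgbd", split="train")
sample = ds[0]
rgb, depth, mask = sample["rgb"], sample["depth"], sample["mask"]  # PIL images
for obj in sample["meta"]:
    # Per-object pose and size annotations generated in Webots.
    print(obj["model"], obj["position"], obj["orientation"], obj["sizeOnImage"])
```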
correll/semanticsegmentationandposeestimationfromrgbd
[ "task_categories:image-segmentation", "task_categories:image-classification", "task_categories:object-detection", "license:mit", "region:us" ]
2023-11-24T10:37:39+00:00
{"license": "mit", "task_categories": ["image-segmentation", "image-classification", "object-detection"], "pretty_name": "Semantic segmenation and pose estimation from RGB-D", "dataset_info": {"features": [{"name": "rgb", "dtype": "image"}, {"name": "depth", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "meta", "list": [{"name": "colors", "sequence": "float64"}, {"name": "file", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "model", "dtype": "string"}, {"name": "numberOfColors", "dtype": "int64"}, {"name": "orientation", "sequence": "float64"}, {"name": "position", "sequence": "float64"}, {"name": "positionOnImage", "sequence": "int64"}, {"name": "size", "sequence": "float64"}, {"name": "sizeOnImage", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 3340733260.96, "num_examples": 1106}], "download_size": 3319212411, "dataset_size": 3340733260.96}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-14T15:43:51+00:00
[]
[]
TAGS #task_categories-image-segmentation #task_categories-image-classification #task_categories-object-detection #license-mit #region-us
RGB-D dataset for instance segmentation (from RGB or depth) and pose estimation of individual objects. Data has been generated by randomizing bin contents in Webots. Each instance contains a mask image as well as metadata containing labels, position, and size of each object. <video src='URL width=720/> You can create your own data by opening webots_grasp.wbt in the world directory using Webots.
[]
[ "TAGS\n#task_categories-image-segmentation #task_categories-image-classification #task_categories-object-detection #license-mit #region-us \n" ]
[ 45 ]
[ "passage: TAGS\n#task_categories-image-segmentation #task_categories-image-classification #task_categories-object-detection #license-mit #region-us \n" ]
13e33f9d04621f1c0cb3d5181e71dd6479793291
# Dataset Card for "maow" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bilallllllll/maow
[ "region:us" ]
2023-11-24T10:42:09+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "previous_frame", "dtype": "image"}, {"name": "next_frame", "dtype": "image"}, {"name": "next_frame_pose", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2492565.0, "num_examples": 12}], "download_size": 2473017, "dataset_size": 2492565.0}}
2023-11-24T10:42:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "maow" More Information needed
[ "# Dataset Card for \"maow\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"maow\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"maow\"\n\nMore Information needed" ]
e24160f5f4ce2658ac6cd8dec0e8eaf70f11cf54
This dataset was created by processing the files from this GitHub repository: [PlantDoc-Object-Detection-Dataset](https://github.com/pratikkayal/PlantDoc-Object-Detection-Dataset/tree/master) Citation BibTeX: ``` @inproceedings{10.1145/3371158.3371196, author = {Singh, Davinder and Jain, Naman and Jain, Pranjali and Kayal, Pratik and Kumawat, Sudhakar and Batra, Nipun}, title = {PlantDoc: A Dataset for Visual Plant Disease Detection}, year = {2020}, isbn = {9781450377386}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3371158.3371196}, doi = {10.1145/3371158.3371196}, booktitle = {Proceedings of the 7th ACM IKDD CoDS and 25th COMAD}, pages = {249–253}, numpages = {5}, keywords = {Deep Learning, Object Detection, Image Classification}, location = {Hyderabad, India}, series = {CoDS COMAD 2020} } ```
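A minimal sketch for reading one annotated example, assuming the standard `datasets` API; the field names (`image`, `objects` with `bbox`/`category`) follow the dataset metadata, and streaming avoids downloading the full multi-gigabyte split:
```python
# Minimal sketch: stream one detection annotation from the test split.
from datasets import load_dataset

ds = load_dataset(
    "susnato/plant_disease_detection_processed",
    split="test",
    streaming=True,
)
ex = next(iter(ds))
print(ex["image"].size)           # PIL image
print(ex["objects"]["bbox"][0])   # first bounding box for this image
print(ex["objects"]["category"])  # class id of each box
```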
susnato/plant_disease_detection_processed
[ "task_categories:object-detection", "license:cc-by-4.0", "region:us" ]
2023-11-24T10:43:53+00:00
{"license": "cc-by-4.0", "task_categories": ["object-detection"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "area", "sequence": "int64"}, {"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "category", "sequence": "int64"}]}, {"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "pixel_mask", "sequence": {"sequence": "int64"}}, {"name": "labels", "struct": [{"name": "area", "sequence": "float32"}, {"name": "boxes", "sequence": {"sequence": "float32"}}, {"name": "class_labels", "sequence": "int64"}, {"name": "image_id", "sequence": "int64"}, {"name": "iscrowd", "sequence": "int64"}, {"name": "orig_size", "sequence": "int64"}, {"name": "size", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 27853534555.06, "num_examples": 2110}, {"name": "test", "num_bytes": 2810816579.0, "num_examples": 214}], "download_size": 5331925364, "dataset_size": 30664351134.06}}
2023-11-25T12:05:23+00:00
[]
[]
TAGS #task_categories-object-detection #license-cc-by-4.0 #region-us
This dataset was created by processing the files from this GitHub repository: PlantDoc-Object-Detection-Dataset Citation BibTeX:
[]
[ "TAGS\n#task_categories-object-detection #license-cc-by-4.0 #region-us \n" ]
[ 26 ]
[ "passage: TAGS\n#task_categories-object-detection #license-cc-by-4.0 #region-us \n" ]
9ab9cf1dcac8d0dd59c6a3c6fecbcc2e23e64a47
# SlimOrca with Chinkara Formatting This dataset is the same as [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), but with formats acceptable for our training systems.
MaralGPT/slimorca-chinkara-format
[ "region:us" ]
2023-11-24T10:48:46+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 926595116, "num_examples": 517982}], "download_size": 448408540, "dataset_size": 926595116}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T10:51:21+00:00
[]
[]
TAGS #region-us
# SlimOrca with Chinkara Formatting This dataset is the same as SlimOrca, but with formats acceptable for our training systems.
[ "# SlimOrca with Chinkara Formatting\n\nThis dataset is the same as SlimOrca, but with formats acceptable for our training systems." ]
[ "TAGS\n#region-us \n", "# SlimOrca with Chinkara Formatting\n\nThis dataset is the same as SlimOrca, but with formats acceptable for our training systems." ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# SlimOrca with Chinkara Formatting\n\nThis dataset is the same as SlimOrca, but with formats acceptable for our training systems." ]
310122d6852934528017f743214c211911e7038a
<div align="center"> <h1> FAVDBench: Fine-grained Audible Video Description </h1> </div> <p align="center"> 🤗 <a href="https://huggingface.co/datasets/OpenNLPLab/FAVDBench" target="_blank">Hugging Face</a> • 🏠 <a href="https://github.com/OpenNLPLab/FAVDBench" target="_blank">GitHub</a> • 🤖 <a href="https://openxlab.org.cn/datasets/OpenNLPLab/FAVDBench" target="_blank">OpenDataLab</a> • 💬 <a href="https://forms.gle/5S3DWpBaV1UVczkf8" target="_blank">Apply Dataset</a> </p> [[`CVPR2023`]](https://openaccess.thecvf.com/content/CVPR2023/html/Shen_Fine-Grained_Audible_Video_Description_CVPR_2023_paper.html) [[`Project Page`]](http://www.avlbench.opennlplab.cn/papers/favd) [[`arXiv`]](https://arxiv.org/abs/2303.15616) [[`Demo`]](https://www.youtube.com/watch?v=iWJvTB-bTWk&ab_channel=OpenNLPLab)[[`BibTex`]](#Citation) [[`中文简介`]](https://mp.weixin.qq.com/s/_M57ZuOHH0UdwB6i9osqOA) - [Introduction 简介](#introduction-简介) - [Files 文件](#files-文件) - [MD5 checksum](#md5-checksum) - [Updates](#updates) - [License](#license) - [Citation](#citation) ## Introduction 简介 在CVPR2023中我们提出了精细化音视频描述任务(Fine-grained Audible Video Description, FAVD)该任务旨在提供有关可听视频的详细文本描述,包括每个对象的外观和空间位置、移动对象的动作以及视频中的声音。我们同是也为社区贡献了第一个精细化音视频描述数据集FAVDBench。对于每个视频片段,我们不仅提供一句话的视频概要,还提供4-6句描述视频的视觉细节和1-2个音频相关描述,且所有的标注都有中英文双语。 At CVPR2023, we introduced the task of Fine-grained Audible Video Description (FAVD). This task aims to provide detailed textual descriptions of audible videos, including the appearance and spatial positions of each object, the actions of moving objects, and the sounds within the video. Additionally, we contributed the first fine-grained audible video description dataset, FAVDBench, to the community. For each video segment, we offer not only a single-sentence video summary but also 4-6 sentences describing the visual details of the video and 1-2 audio-related descriptions, all annotated in both Chinese and English. ## Files 文件 * `meta`: metadata for raw videos * `train`, `val`, `test`: train, val, test split * `ytid`: youtube id * `start`: vid segments starting time in seconds * `end`: vid segments ending time in seconds * `videos` , `audios` : raw video and audio segments * `train` : train split * `val`: validation split * `test`: test split * **📢📢📢 Please refer to [Apply Dataset](https://forms.gle/5S3DWpBaV1UVczkf8) to get raw video/audio data** * `annotations_en.json` : annotated descirptions in English * `id`: unique data (video segment) id * `description`: audio-visual descriptioins * `annotations_en.json` : annotated descirptions in Chinese * `id`: unique data (video segment) id * `cap`, `des`: audio-visual descriptioins * `dcount`: count of descriptions * `experiments`: expiermental files to replicate the results outlined in the paper. * **📢📢📢 Please refer to [GitHub Repo](https://github.com/OpenNLPLab/FAVDBench) to get related data** ## MD5 checksum | file | md5sum | | :-------------------------: | :------------------------------: | | `videos/train.zip` | 41ddad46ffac339cb0b65dffc02eda65 | | `videos/val.zip` | 35291ad23944d67212c6e47b4cc6d619 | | `videos/test.zip` | 07046d205837d2e3b1f65549fc1bc4d7 | | `audios/train.zip` | 50cc83eebd84f85e9b86bbd2a7517f3f | | `audios/val.zip` | 73995c5d1fcef269cc90be8a8ef6d917 | | `audios/test.zip` | f72085feab6ca36060a0a073b31e8acc | ## Updates **Latest Version: Jan 9, 2023. Public V0.1** 1. v0.1 <Jan 9, 2023>: initial publication ## License The community usage of FAVDBench model & code requires adherence to [Apache 2.0](https://github.com/OpenNLPLab/FAVDBench/blob/main/LICENSE). 
The FAVDBench model & code supports commercial use. ## Citation If you use FAVD or FAVDBench in your research, please use the following BibTeX entry. ``` @InProceedings{Shen_2023_CVPR, author = {Shen, Xuyang and Li, Dong and Zhou, Jinxing and Qin, Zhen and He, Bowen and Han, Xiaodong and Li, Aixuan and Dai, Yuchao and Kong, Lingpeng and Wang, Meng and Qiao, Yu and Zhong, Yiran}, title = {Fine-Grained Audible Video Description}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2023}, pages = {10585-10596} } ```
OpenNLPLab/FAVDBench
[ "size_categories:10K<n<100K", "language:en", "language:zh", "license:apache-2.0", "FAVD", "FAVDBench", "Video Description", "Audio Description", "Audible Video Description", "Fine-grained Description", "arxiv:2303.15616", "region:us" ]
2023-11-24T10:53:16+00:00
{"language": ["en", "zh"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "tags": ["FAVD", "FAVDBench", "Video Description", "Audio Description", "Audible Video Description", "Fine-grained Description"]}
2023-12-06T11:56:08+00:00
[ "2303.15616" ]
[ "en", "zh" ]
TAGS #size_categories-10K<n<100K #language-English #language-Chinese #license-apache-2.0 #FAVD #FAVDBench #Video Description #Audio Description #Audible Video Description #Fine-grained Description #arxiv-2303.15616 #region-us
FAVDBench: Fine-grained Audible Video Description =================================================== [Hugging Face](URL target=) • [GitHub](URL target=) • [OpenDataLab](URL target=) • [Apply Dataset](URL target=) [['CVPR2023']](URL [['Project Page']](URL [['arXiv']](URL [['Demo']](URL [['中文简介']](URL * Introduction 简介 * Files 文件 * MD5 checksum * Updates * License * Citation Introduction 简介 --------------- 在CVPR2023中我们提出了精细化音视频描述任务(Fine-grained Audible Video Description, FAVD)该任务旨在提供有关可听视频的详细文本描述,包括每个对象的外观和空间位置、移动对象的动作以及视频中的声音。我们同时也为社区贡献了第一个精细化音视频描述数据集FAVDBench。对于每个视频片段,我们不仅提供一句话的视频概要,还提供4-6句描述视频的视觉细节和1-2个音频相关描述,且所有的标注都有中英文双语。 At CVPR2023, we introduced the task of Fine-grained Audible Video Description (FAVD). This task aims to provide detailed textual descriptions of audible videos, including the appearance and spatial positions of each object, the actions of moving objects, and the sounds within the video. Additionally, we contributed the first fine-grained audible video description dataset, FAVDBench, to the community. For each video segment, we offer not only a single-sentence video summary but also 4-6 sentences describing the visual details of the video and 1-2 audio-related descriptions, all annotated in both Chinese and English. Files 文件 -------- * 'meta': metadata for raw videos + 'train', 'val', 'test': train, val, test split + 'ytid': youtube id + 'start': vid segments starting time in seconds + 'end': vid segments ending time in seconds * 'videos' , 'audios' : raw video and audio segments + 'train' : train split + 'val': validation split + 'test': test split + Please refer to Apply Dataset to get raw video/audio data * 'annotations\_en.json' : annotated descriptions in English + 'id': unique data (video segment) id + 'description': audio-visual descriptions * 'annotations\_en.json' : annotated descriptions in Chinese + 'id': unique data (video segment) id + 'cap', 'des': audio-visual descriptions + 'dcount': count of descriptions * 'experiments': experimental files to replicate the results outlined in the paper. + Please refer to GitHub Repo to get related data MD5 checksum ------------ Updates ------- Latest Version: Jan 9, 2023. Public V0.1 1. v0.1 <Jan 9, 2023>: initial publication License ------- The community usage of FAVDBench model & code requires adherence to Apache 2.0. The FAVDBench model & code supports commercial use. If you use FAVD or FAVDBench in your research, please use the following BibTeX entry.
[]
[ "TAGS\n#size_categories-10K<n<100K #language-English #language-Chinese #license-apache-2.0 #FAVD #FAVDBench #Video Description #Audio Description #Audible Video Description #Fine-grained Description #arxiv-2303.15616 #region-us \n" ]
[ 72 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #language-English #language-Chinese #license-apache-2.0 #FAVD #FAVDBench #Video Description #Audio Description #Audible Video Description #Fine-grained Description #arxiv-2303.15616 #region-us \n" ]
7fba088cd40c2dd3ec16e6544adba1ea9f0a20c1
# Dataset Card for "CNetImg2Img-samples" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bilallllllll/CNetImg2Img
[ "region:us" ]
2023-11-24T11:00:02+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "input_image", "dtype": "image"}, {"name": "edit_pose", "dtype": "image"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2992719.0, "num_examples": 15}], "download_size": 2976836, "dataset_size": 2992719.0}}
2023-11-24T11:00:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "CNetImg2Img-samples" More Information needed
[ "# Dataset Card for \"CNetImg2Img-samples\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"CNetImg2Img-samples\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"CNetImg2Img-samples\"\n\nMore Information needed" ]
e2f884a3a5b3af01ca8c6d1ab6ade335e2f45837
kuroshiba raizo dataset (a dataset of Raizo the black Shiba Inu)
mussso/kuroshiba_raizo
[ "region:us" ]
2023-11-24T11:20:20+00:00
{}
2023-11-24T11:26:14+00:00
[]
[]
TAGS #region-us
kuroshiba raizo dataset (a dataset of Raizo the black Shiba Inu)
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
271733ceaa8796444f36057f8fe9c2f2c1f080ad
# Dataset of unicorn (Azur Lane) This is the dataset of unicorn (Azur Lane), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI)) | Name | Images | Download | Description | |:---|---:|:---|:---| | raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 522 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 597 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 522 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 522 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 323 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 597 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 597 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
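A minimal sketch for fetching one of the archives listed above, assuming the `huggingface_hub` download API; the chosen filename is simply one entry from the table:
```python
# Minimal sketch: download one packaged variant from this dataset repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="AppleHarem/unicorn_azurlane",
    filename="dataset-384x512.zip",  # the 384x512 aligned variant from the table
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded archive
```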
AppleHarem/unicorn_azurlane
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-11-24T11:49:47+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-11-24T11:50:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of unicorn (Azur Lane) ============================== This is the dataset of unicorn (Azur Lane), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization). (LittleAppleWebUI)
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
c3072c050356ed9e04f0f136b84be6d6af3f7500
# Dataset Card for "Vi-GSM8K" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
longhoang06/Vi-GSM8K
[ "region:us" ]
2023-11-24T12:14:28+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5450234, "num_examples": 8792}], "download_size": 2753130, "dataset_size": 5450234}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T12:14:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Vi-GSM8K" More Information needed
[ "# Dataset Card for \"Vi-GSM8K\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Vi-GSM8K\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Vi-GSM8K\"\n\nMore Information needed" ]
4d2a9435345fb4e3b1eee27cb4ad1214fd406eba
# Dataset Card for "literalist_ds" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tr416/literalist_ds
[ "region:us" ]
2023-11-24T12:47:37+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 348610, "num_examples": 269}], "download_size": 182438, "dataset_size": 348610}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T12:47:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "literalist_ds" More Information needed
[ "# Dataset Card for \"literalist_ds\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"literalist_ds\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"literalist_ds\"\n\nMore Information needed" ]
8752ab6d0439899edf6724e3a16732c6cb2bc3b4
The enem 2022 and enem 2023 datasets encompass all multiple-choice questions from the last two editions of the [Exame Nacional do Ensino Médio (ENEM)](https://www.gov.br/inep/pt-br/areas-de-atuacao/avaliacao-e-exames-educacionais/enem), the main standardized entrance examination adopted by Brazilian universities. The datasets have been created to allow the evaluation of both textual-only and textual-visual language models. To evaluate textual-only models, we incorporated into the datasets the textual descriptions of the images that appear in the questions' statements from the orange ENEM exam booklet, a particular booklet that offers accessibility to people with visual impairments. A repository containing the essential code for utilizing this dataset is accessible [here](https://github.com/piresramon/gpt-4-enem). If you use this dataset in your research, please acknowledge the papers below by citing them: ```bibtex @misc{pires2023evaluating, title={Evaluating GPT-4's Vision Capabilities on Brazilian University Admission Exams}, author={Ramon Pires and Thales Sales Almeida and Hugo Abonizio and Rodrigo Nogueira}, year={2023}, eprint={2311.14169}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{nunes2023evaluating, title={Evaluating GPT-3.5 and GPT-4 Models on Brazilian University Admission Exams}, author={Desnes Nunes and Ricardo Primi and Ramon Pires and Roberto Lotufo and Rodrigo Nogueira}, year={2023}, eprint={2303.17003}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
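A minimal sketch for loading one exam year, assuming the standard `datasets` API; the config names (`2022`, `2023`) and the field names come from the repo metadata:
```python
# Minimal sketch: load the 2023 exam and inspect one question.
from datasets import load_dataset

enem = load_dataset("maritaca-ai/enem", "2023", split="train")
q = enem[0]
print(q["question"])      # question statement
print(q["alternatives"])  # multiple-choice options
print(q["label"])         # letter of the correct alternative
```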
maritaca-ai/enem
[ "task_categories:visual-question-answering", "task_categories:multiple-choice", "size_categories:n<1K", "language:pt", "license:apache-2.0", "arxiv:2311.14169", "arxiv:2303.17003", "region:us" ]
2023-11-24T12:55:21+00:00
{"language": ["pt"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["visual-question-answering", "multiple-choice"], "pretty_name": "ENEM", "configs": [{"config_name": "2022", "data_files": "2022.jsonl"}, {"config_name": "2023", "data_files": "2023.jsonl", "default": true}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "exam", "dtype": "string"}, {"name": "IU", "dtype": "bool"}, {"name": "ledor", "dtype": "bool"}, {"name": "question", "dtype": "string"}, {"name": "alternatives", "sequence": "string"}, {"name": "figures", "sequence": "string"}, {"name": "description", "sequence": "string"}, {"name": "label", "dtype": "string"}]}}
2023-12-19T19:08:47+00:00
[ "2311.14169", "2303.17003" ]
[ "pt" ]
TAGS #task_categories-visual-question-answering #task_categories-multiple-choice #size_categories-n<1K #language-Portuguese #license-apache-2.0 #arxiv-2311.14169 #arxiv-2303.17003 #region-us
The enem 2022 and enem 2023 datasets encompass all multiple-choice questions from the last two editions of the Exame Nacional do Ensino Médio (ENEM), the main standardized entrance examination adopted by Brazilian universities. The datasets have been created to allow the evaluation of both textual-only and textual-visual language models. To evaluate textual-only models, we incorporated into the datasets the textual descriptions of the images that appear in the questions' statements from the orange ENEM exam booklet, a particular booklet that offers accessibility to people with visual impairments. A repository containing the essential code for utilizing this dataset is accessible here. If you use this dataset in your research, please acknowledge the papers below by citing them:
[]
[ "TAGS\n#task_categories-visual-question-answering #task_categories-multiple-choice #size_categories-n<1K #language-Portuguese #license-apache-2.0 #arxiv-2311.14169 #arxiv-2303.17003 #region-us \n" ]
[ 74 ]
[ "passage: TAGS\n#task_categories-visual-question-answering #task_categories-multiple-choice #size_categories-n<1K #language-Portuguese #license-apache-2.0 #arxiv-2311.14169 #arxiv-2303.17003 #region-us \n" ]
47cc19edf2d047ab2dd1d72f3c695e0d51c2e4c0
# Dataset Card for "GL_GLOBAL_CF" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sshreyy/GL_GLOBAL_CF
[ "region:us" ]
2023-11-24T13:01:22+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 57539869, "num_examples": 24996}, {"name": "test", "num_bytes": 6092389, "num_examples": 2547}], "download_size": 24236436, "dataset_size": 63632258}}
2023-11-24T13:03:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "GL_GLOBAL_CF" More Information needed
[ "# Dataset Card for \"GL_GLOBAL_CF\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"GL_GLOBAL_CF\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"GL_GLOBAL_CF\"\n\nMore Information needed" ]
df275fb1696b82ead5214c7273a0afed7cc1a6c4
# Dataset Card for "gaugau-v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nguyenthanhdo/gaugau-v3
[ "region:us" ]
2023-11-24T13:02:45+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77391286.0, "num_examples": 94953}], "download_size": 30719002, "dataset_size": 77391286.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-24T13:05:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gaugau-v3" More Information needed
[ "# Dataset Card for \"gaugau-v3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gaugau-v3\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gaugau-v3\"\n\nMore Information needed" ]
0454395bab2de207f1a35d8844fea2f5000aae6a
# NORTS - Norwegian Topic-based Summarization Dataset Translated from NEWTS (NEWs Topic-based Summarization Dataset, https://github.com/ali-bahrainian/NEWTS) using the 1.3B NLLB model (https://huggingface.co/facebook/nllb-200-distilled-1.3B)
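A minimal sketch of the kind of English-to-Norwegian translation step described above, using the NLLB checkpoint named in the card; the FLORES language codes (`eng_Latn`, `nob_Latn` for Norwegian Bokmål) are an assumption, not taken from the card:
```python
# Minimal sketch: translate one sentence with the 1.3B NLLB checkpoint.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-1.3B",
    src_lang="eng_Latn",  # assumed FLORES code for English
    tgt_lang="nob_Latn",  # assumed FLORES code for Norwegian Bokmål
)
print(translator("The parliament debated the new budget.")[0]["translation_text"])
```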
tollefj/NORTS
[ "region:us" ]
2023-11-24T13:16:41+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "AssignmentId", "dtype": "string"}, {"name": "docId", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "tid1", "dtype": "int64"}, {"name": "tid2", "dtype": "int64"}, {"name": "words1", "dtype": "string"}, {"name": "words2", "dtype": "string"}, {"name": "phrases1", "dtype": "string"}, {"name": "phrases2", "dtype": "string"}, {"name": "sentences1", "dtype": "string"}, {"name": "sentences2", "dtype": "string"}, {"name": "summary1", "dtype": "string"}, {"name": "summary2", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11384802, "num_examples": 2400}, {"name": "test", "num_bytes": 2979312, "num_examples": 600}], "download_size": 7539242, "dataset_size": 14364114}}
2023-11-24T15:55:48+00:00
[]
[]
TAGS #region-us
# NORTS - Norwegian Topic-based Summarization Dataset Translated from NEWTS (NEWs Topic-based Summarization Dataset, URL using the 1.3B NLLB model (URL
[ "# NORTS - Norwegian Topic-based Summarization Dataset\nTranslated from NORTS (NEWs Topic-based Summarization Dataset, URL using the 1.3B NLLB model (URL" ]
[ "TAGS\n#region-us \n", "# NORTS - Norwegian Topic-based Summarization Dataset\nTranslated from NORTS (NEWs Topic-based Summarization Dataset, URL using the 1.3B NLLB model (URL" ]
[ 6, 43 ]
[ "passage: TAGS\n#region-us \n# NORTS - Norwegian Topic-based Summarization Dataset\nTranslated from NORTS (NEWs Topic-based Summarization Dataset, URL using the 1.3B NLLB model (URL" ]
be217fbf5c0d37010d65659ab595e3fbca5faaad
# This is a really awesome dataset
Alex-Song/Test
[ "task_categories:translation", "size_categories:1K<n<10K", "language:ja", "language:zh", "language:ar", "license:apache-2.0", "music", "region:us" ]
2023-11-24T13:17:27+00:00
{"language": ["ja", "zh", "ar"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["translation"], "pretty_name": "MTSpeech", "tags": ["music"], "extra_gated_prompt": "You agree to not attempt to determine the identity of individuals in this dataset", "extra_gated_fields": {"Name": "text", "Affiliation": "text", "Email": "text", "I agree to not attempt to determine the identity of speakers in this dataset": "checkbox"}, "viewer": false}
2023-11-25T09:38:35+00:00
[]
[ "ja", "zh", "ar" ]
TAGS #task_categories-translation #size_categories-1K<n<10K #language-Japanese #language-Chinese #language-Arabic #license-apache-2.0 #music #region-us
# This is a really awesome dataset
[ "# 这是一个很牛逼的数据集" ]
[ "TAGS\n#task_categories-translation #size_categories-1K<n<10K #language-Japanese #language-Chinese #language-Arabic #license-apache-2.0 #music #region-us \n", "# 这是一个很牛逼的数据集" ]
[ 53, 8 ]
[ "passage: TAGS\n#task_categories-translation #size_categories-1K<n<10K #language-Japanese #language-Chinese #language-Arabic #license-apache-2.0 #music #region-us \n# 这是一个很牛逼的数据集" ]
18c43be3a23478cdd943d1a52e28855236ab4f58
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Torando/medical-mistral
[ "license:apache-2.0", "region:us" ]
2023-11-24T13:27:20+00:00
{"license": "apache-2.0"}
2023-11-24T13:29:13+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 14, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
c6ef525470157916c4106facba9aea6ac371c8e5
# Dataset Card for "vibhorag101/suicide_prediction_dataset_phr" - The dataset is sourced from Reddit and is available on [Kaggle](https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch). - The dataset contains text with binary labels for suicide or non-suicide. - The dataset was cleaned and following steps were applied - Converted to lowercase - Removed numbers and special characters. - Removed URLs, Emojis and accented characters. - Removed any word contractions. - Remove any extra white spaces and any extra spaces after a single space. - Removed any consecutive characters repeated more than 3 times. - Tokenised the text, then lemmatized it and then removed the stopwords (excluding not). - The `class_label` column was renamed to `label` for use with trainer API. - The evaluation set had ~23000 samples, while the training set had ~186k samples, i.e. a 80:10:10 (train:test:val) split.
vibhorag101/suicide_prediction_dataset_phr
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "license:mit", "region:us" ]
2023-11-24T13:27:36+00:00
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "Suicidal Tendency Prediction Dataset", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75975910.63587219, "num_examples": 185574}, {"name": "test", "num_bytes": 18994182.36412781, "num_examples": 46394}], "download_size": 53587175, "dataset_size": 94970093}}
2023-11-25T03:52:20+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #region-us
# Dataset Card for "vibhorag101/suicide_prediction_dataset_phr" - The dataset is sourced from Reddit and is available on Kaggle. - The dataset contains text with binary labels for suicide or non-suicide. - The dataset was cleaned and following steps were applied - Converted to lowercase - Removed numbers and special characters. - Removed URLs, Emojis and accented characters. - Removed any word contractions. - Remove any extra white spaces and any extra spaces after a single space. - Removed any consecutive characters repeated more than 3 times. - Tokenised the text, then lemmatized it and then removed the stopwords (excluding not). - The 'class_label' column was renamed to 'label' for use with trainer API. - The evaluation set had ~23000 samples, while the training set had ~186k samples, i.e. a 80:10:10 (train:test:val) split.
[ "# Dataset Card for \"vibhorag101/suicide_prediction_dataset_phr\"\n- The dataset is sourced from Reddit and is available on Kaggle.\n- The dataset contains text with binary labels for suicide or non-suicide. \n- The dataset was cleaned and following steps were applied\n - Converted to lowercase\n - Removed numbers and special characters.\n - Removed URLs, Emojis and accented characters.\n - Removed any word contractions.\n - Remove any extra white spaces and any extra spaces after a single space.\n - Removed any consecutive characters repeated more than 3 times.\n - Tokenised the text, then lemmatized it and then removed the stopwords (excluding not).\n - The 'class_label' column was renamed to 'label' for use with trainer API.\n- The evaluation set had ~23000 samples, while the training set had ~186k samples, i.e. a 80:10:10 (train:test:val) split." ]
[ "TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #region-us \n", "# Dataset Card for \"vibhorag101/suicide_prediction_dataset_phr\"\n- The dataset is sourced from Reddit and is available on Kaggle.\n- The dataset contains text with binary labels for suicide or non-suicide. \n- The dataset was cleaned and following steps were applied\n - Converted to lowercase\n - Removed numbers and special characters.\n - Removed URLs, Emojis and accented characters.\n - Removed any word contractions.\n - Remove any extra white spaces and any extra spaces after a single space.\n - Removed any consecutive characters repeated more than 3 times.\n - Tokenised the text, then lemmatized it and then removed the stopwords (excluding not).\n - The 'class_label' column was renamed to 'label' for use with trainer API.\n- The evaluation set had ~23000 samples, while the training set had ~186k samples, i.e. a 80:10:10 (train:test:val) split." ]
[ 38, 230 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #region-us \n# Dataset Card for \"vibhorag101/suicide_prediction_dataset_phr\"\n- The dataset is sourced from Reddit and is available on Kaggle.\n- The dataset contains text with binary labels for suicide or non-suicide. \n- The dataset was cleaned and following steps were applied\n - Converted to lowercase\n - Removed numbers and special characters.\n - Removed URLs, Emojis and accented characters.\n - Removed any word contractions.\n - Remove any extra white spaces and any extra spaces after a single space.\n - Removed any consecutive characters repeated more than 3 times.\n - Tokenised the text, then lemmatized it and then removed the stopwords (excluding not).\n - The 'class_label' column was renamed to 'label' for use with trainer API.\n- The evaluation set had ~23000 samples, while the training set had ~186k samples, i.e. a 80:10:10 (train:test:val) split." ]
b78bcd6a70c1502d7b0c92d9eedf9269f2a4d9ce
# Dataset Card for "bnf_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
manu/bnf_clean
[ "region:us" ]
2023-11-24T14:38:15+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "mean_nqa", "dtype": "float64"}, {"name": "date", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "rights", "dtype": "string"}, {"name": "original_folder", "dtype": "string"}, {"name": "perplexity", "dtype": "float64"}], "splits": [{"name": "2023", "num_bytes": 129088433.72207084, "num_examples": 441}, {"name": "2021_1", "num_bytes": 96451.66666666667, "num_examples": 5}, {"name": "2021_2", "num_bytes": 85416.8, "num_examples": 4}], "download_size": 77863123, "dataset_size": 129270302.18873751}}
2023-11-24T15:16:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bnf_clean" More Information needed
[ "# Dataset Card for \"bnf_clean\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bnf_clean\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bnf_clean\"\n\nMore Information needed" ]
6a929725683ff2e9d0ee210f67ec985bce87332b
# Dataset Card for "dolly_context_enfr" This is a filtered version of [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), then traduced to french with Deepl pro API, the best translation solution available on the market. Our goal is to gather french data on question answering on context, where the model should not bring new information not present in the context given. Our goal is to limit hallucination. The filtering have been done in three parts: - We keep only the data with a not empty context (we are not interested in random chat or not sourced information) - We don't take data where the answer is more than 1,5 times longer than the context, our study of the data showed that in those cases the information come from other sources than the context, and/or concist of a copy past of the context - For long context data (>1000 characters), we don't take data where the answer is longer than context (character wize) - We also filter around 30 data with too long context (10k character), answer (5k character) and instruction (5k character) as ther were showed to have a wrong format Our filtered version of dolly dataset only contain 3 of the 7 categories, the annotation guidelines for each of the categories were as follows: - **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form. - **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form. - **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form. | Category | Samples | | - | - | | closed_qa | 1711 | | information_extraction | 1377 | | summarization | 1064 | Note that we considered 'brainstorming' and 'classification' data, but there are not suited for our LLM project, and very subjective (as not based on a context), so we decided to not use them. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62ce7972a1006f883519d88a/h_qjY7Tt5INoylK3oOvFA.png)
ProfessorBob/dolly_contextQA_enfr
[ "language:fr", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2023-11-24T15:03:07+00:00
{"language": ["fr", "en"], "license": "cc-by-sa-3.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "fr_context", "dtype": "string"}, {"name": "fr_response", "dtype": "string"}, {"name": "fr_instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "qid", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9785049.205202311, "num_examples": 3300}, {"name": "eval", "num_bytes": 1340255.2244701348, "num_examples": 452}, {"name": "test", "num_bytes": 1186066.570327553, "num_examples": 400}], "download_size": 7746263, "dataset_size": 12311371.0}}
2024-01-10T16:23:01+00:00
[]
[ "fr", "en" ]
TAGS #language-French #language-English #license-cc-by-sa-3.0 #region-us
Dataset Card for "dolly\_context\_enfr" ======================================= This is a filtered version of databricks-dolly-15k, then traduced to french with Deepl pro API, the best translation solution available on the market. Our goal is to gather french data on question answering on context, where the model should not bring new information not present in the context given. Our goal is to limit hallucination. The filtering have been done in three parts: * We keep only the data with a not empty context (we are not interested in random chat or not sourced information) * We don't take data where the answer is more than 1,5 times longer than the context, our study of the data showed that in those cases the information come from other sources than the context, and/or concist of a copy past of the context * For long context data (>1000 characters), we don't take data where the answer is longer than context (character wize) * We also filter around 30 data with too long context (10k character), answer (5k character) and instruction (5k character) as ther were showed to have a wrong format Our filtered version of dolly dataset only contain 3 of the 7 categories, the annotation guidelines for each of the categories were as follows: * Closed QA: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form. * Summarization: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form. * Information Extraction: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form. Note that we considered 'brainstorming' and 'classification' data, but there are not suited for our LLM project, and very subjective (as not based on a context), so we decided to not use them. !image/png
[]
[ "TAGS\n#language-French #language-English #license-cc-by-sa-3.0 #region-us \n" ]
[ 27 ]
[ "passage: TAGS\n#language-French #language-English #license-cc-by-sa-3.0 #region-us \n" ]