Dataset viewer schema (column names with value-length ranges, recovered from the flattened viewer table):

| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
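A minimal sketch of inspecting such a schema programmatically with the `datasets` library; the repository id below is a placeholder, since the dump does not name the repo it was taken from:

```python
from datasets import load_dataset

# Placeholder repo id -- the dump above does not identify its own repository.
ds = load_dataset("some-org/some-dataset-dump", split="train")

# `features` mirrors the column/type table above; row fields match the column names.
print(ds.features)
print(ds[0]["id"], len(ds[0]["text"]))
```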
84d5db5670e975957fa159550afb19f4cb54ef63
# Dataset Card for Evaluation run of Open-Orca/Mistral-7B-SlimOrca

## Dataset Description

- **Homepage:**
- **Repository:** https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Open-Orca__Mistral-7B-SlimOrca",
    "harness_winogrande_5",
    split="train")
```

## Latest results

These are the [latest results from run 2023-10-24T04:55:17.464867](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__Mistral-7B-SlimOrca/blob/main/results_2023-10-24T04-55-17.464867.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and the "latest" split of each eval):

```python
{
    "all": {
        "em": 0.03460570469798658,
        "em_stderr": 0.0018718276753995743,
        "f1": 0.11197776845637529,
        "f1_stderr": 0.002382569794079873,
        "acc": 0.4940341305179057,
        "acc_stderr": 0.011521340479768794
    },
    "harness|drop|3": {
        "em": 0.03460570469798658,
        "em_stderr": 0.0018718276753995743,
        "f1": 0.11197776845637529,
        "f1_stderr": 0.002382569794079873
    },
    "harness|gsm8k|5": {
        "acc": 0.2137983320697498,
        "acc_stderr": 0.011293054698635044
    },
    "harness|winogrande|5": {
        "acc": 0.7742699289660616,
        "acc_stderr": 0.011749626260902543
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
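Complementing the loading snippet in the card, a minimal sketch of pulling the aggregated numbers via the "results" configuration and its "latest" split, both of which the card and the repo metadata confirm exist (the exact record layout inside that split is an assumption; inspect it before relying on field names):

```python
from datasets import load_dataset

# "results" is the aggregated-results configuration described in the summary;
# its "latest" split always points at the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_Open-Orca__Mistral-7B-SlimOrca",
    "results",
    split="latest",
)
print(results[0])  # record layout assumed; inspect before relying on field names
```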
open-llm-leaderboard/details_Open-Orca__Mistral-7B-SlimOrca
[ "region:us" ]
2023-10-11T02:20:26+00:00
{"pretty_name": "Evaluation run of Open-Orca/Mistral-7B-SlimOrca", "dataset_summary": "Dataset automatically created during the evaluation run of model [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Open-Orca__Mistral-7B-SlimOrca\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T04:55:17.464867](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__Mistral-7B-SlimOrca/blob/main/results_2023-10-24T04-55-17.464867.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03460570469798658,\n \"em_stderr\": 0.0018718276753995743,\n \"f1\": 0.11197776845637529,\n \"f1_stderr\": 0.002382569794079873,\n \"acc\": 0.4940341305179057,\n \"acc_stderr\": 0.011521340479768794\n },\n \"harness|drop|3\": {\n \"em\": 0.03460570469798658,\n \"em_stderr\": 0.0018718276753995743,\n \"f1\": 0.11197776845637529,\n \"f1_stderr\": 0.002382569794079873\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2137983320697498,\n \"acc_stderr\": 0.011293054698635044\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7742699289660616,\n \"acc_stderr\": 0.011749626260902543\n }\n}\n```", "repo_url": "https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|arc:challenge|25_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T04_55_17.464867", "path": ["**/details_harness|drop|3_2023-10-24T04-55-17.464867.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T04-55-17.464867.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T04_55_17.464867", "path": ["**/details_harness|gsm8k|5_2023-10-24T04-55-17.464867.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T04-55-17.464867.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hellaswag|10_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", 
"data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-20-03.477959.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-20-03.477959.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T03-20-03.477959.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T03-20-03.477959.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T03-20-03.477959.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T04_55_17.464867", "path": ["**/details_harness|winogrande|5_2023-10-24T04-55-17.464867.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T04-55-17.464867.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T03_20_03.477959", "path": ["results_2023-10-11T03-20-03.477959.parquet"]}, {"split": "2023_10_24T04_55_17.464867", "path": ["results_2023-10-24T04-55-17.464867.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T04-55-17.464867.parquet"]}]}]}
2023-10-24T03:55:30+00:00
[]
[]
46606e2cfddf806161b616058d84ecc85f3c0633
# Dataset Card for Evaluation run of sequelbox/StellarBright

## Dataset Description

- **Homepage:**
- **Repository:** https://huggingface.co/sequelbox/StellarBright
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [sequelbox/StellarBright](https://huggingface.co/sequelbox/StellarBright) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_sequelbox__StellarBright_public",
    "harness_winogrande_5",
    split="train")
```

## Latest results

These are the [latest results from run 2023-11-08T22:55:36.010619](https://huggingface.co/datasets/open-llm-leaderboard/details_sequelbox__StellarBright_public/blob/main/results_2023-11-08T22-55-36.010619.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and the "latest" split of each eval):

```python
{
    "all": {
        "em": 0.34458892617449666,
        "em_stderr": 0.004866841438021566,
        "f1": 0.4966107382550379,
        "f1_stderr": 0.004389897684698882,
        "acc": 0.613835910465284,
        "acc_stderr": 0.011977981888400647
    },
    "harness|drop|3": {
        "em": 0.34458892617449666,
        "em_stderr": 0.004866841438021566,
        "f1": 0.4966107382550379,
        "f1_stderr": 0.004389897684698882
    },
    "harness|gsm8k|5": {
        "acc": 0.3949962092494314,
        "acc_stderr": 0.01346535496997321
    },
    "harness|winogrande|5": {
        "acc": 0.8326756116811366,
        "acc_stderr": 0.010490608806828082
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
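As a sanity check on the reported numbers, a sketch of recomputing the winogrande accuracy from the per-example details; the per-example column name (`acc`) is an assumption here, so verify it against `details.column_names` first:

```python
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_sequelbox__StellarBright_public",
    "harness_winogrande_5",
    split="latest",
)
# Assumed column: detail files typically carry a per-example accuracy field.
# Check details.column_names if this raises a KeyError.
acc = sum(details["acc"]) / len(details)
print(f"winogrande acc ~= {acc:.4f}")  # should land near the reported 0.8327
```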
open-llm-leaderboard/details_sequelbox__StellarBright
[ "region:us" ]
2023-10-11T02:35:24+00:00
{"pretty_name": "Evaluation run of sequelbox/StellarBright", "dataset_summary": "Dataset automatically created during the evaluation run of model [sequelbox/StellarBright](https://huggingface.co/sequelbox/StellarBright) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sequelbox__StellarBright_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T22:55:36.010619](https://huggingface.co/datasets/open-llm-leaderboard/details_sequelbox__StellarBright_public/blob/main/results_2023-11-08T22-55-36.010619.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.34458892617449666,\n \"em_stderr\": 0.004866841438021566,\n \"f1\": 0.4966107382550379,\n \"f1_stderr\": 0.004389897684698882,\n \"acc\": 0.613835910465284,\n \"acc_stderr\": 0.011977981888400647\n },\n \"harness|drop|3\": {\n \"em\": 0.34458892617449666,\n \"em_stderr\": 0.004866841438021566,\n \"f1\": 0.4966107382550379,\n \"f1_stderr\": 0.004389897684698882\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3949962092494314,\n \"acc_stderr\": 0.01346535496997321\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8326756116811366,\n \"acc_stderr\": 0.010490608806828082\n }\n}\n```", "repo_url": "https://huggingface.co/sequelbox/StellarBright", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_08T22_55_36.010619", "path": ["**/details_harness|drop|3_2023-11-08T22-55-36.010619.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T22-55-36.010619.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_08T22_55_36.010619", "path": ["**/details_harness|gsm8k|5_2023-11-08T22-55-36.010619.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T22-55-36.010619.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_08T22_55_36.010619", "path": ["**/details_harness|winogrande|5_2023-11-08T22-55-36.010619.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T22-55-36.010619.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T22_55_36.010619", "path": ["results_2023-11-08T22-55-36.010619.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T22-55-36.010619.parquet"]}]}]}
2023-12-01T14:53:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of sequelbox/StellarBright ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model sequelbox/StellarBright on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-08T22:55:36.010619(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of sequelbox/StellarBright", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model sequelbox/StellarBright on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-08T22:55:36.010619(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of sequelbox/StellarBright", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model sequelbox/StellarBright on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-08T22:55:36.010619(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 17, 31, 166, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of sequelbox/StellarBright## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model sequelbox/StellarBright on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T22:55:36.010619(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
cdd12d61041ef3d862f1da58c9dd5e4a52035b90
# Dataset Card for Evaluation run of nicholasKluge/Aira-2-1B1

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/nicholasKluge/Aira-2-1B1
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [nicholasKluge/Aira-2-1B1](https://huggingface.co/nicholasKluge/Aira-2-1B1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-28T21:10:09.123262](https://huggingface.co/datasets/open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1/blob/main/results_2023-10-28T21-10-09.123262.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.0,
        "em_stderr": 0.0,
        "f1": 0.003932466442953021,
        "f1_stderr": 0.00031476990050976393,
        "acc": 0.2513812154696133,
        "acc_stderr": 0.007026135605808218
    },
    "harness|drop|3": {
        "em": 0.0,
        "em_stderr": 0.0,
        "f1": 0.003932466442953021,
        "f1_stderr": 0.00031476990050976393
    },
    "harness|gsm8k|5": {
        "acc": 0.0,
        "acc_stderr": 0.0
    },
    "harness|winogrande|5": {
        "acc": 0.5027624309392266,
        "acc_stderr": 0.014052271211616436
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
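As a usage note on the snippet in the card above, here is a minimal sketch of inspecting the per-example WinoGrande details with pandas. The per-example column names vary across harness versions, so the sketch discovers them rather than assuming any:

```python
# Sketch: turn the per-example WinoGrande details from the card above into a
# pandas DataFrame for inspection. Column names differ across harness
# versions, so we print them instead of hard-coding any.
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1",
                    "harness_winogrande_5",
                    split="latest")  # same run the "train" split points to
df = data.to_pandas()
print(df.columns.tolist())           # discover the per-example fields
print(len(df), "evaluated examples")
```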
open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1
[ "region:us" ]
2023-10-11T02:37:19+00:00
{"pretty_name": "Evaluation run of nicholasKluge/Aira-2-1B1", "dataset_summary": "Dataset automatically created during the evaluation run of model [nicholasKluge/Aira-2-1B1](https://huggingface.co/nicholasKluge/Aira-2-1B1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T21:10:09.123262](https://huggingface.co/datasets/open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1/blob/main/results_2023-10-28T21-10-09.123262.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.003932466442953021,\n \"f1_stderr\": 0.00031476990050976393,\n \"acc\": 0.2513812154696133,\n \"acc_stderr\": 0.007026135605808218\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.003932466442953021,\n \"f1_stderr\": 0.00031476990050976393\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5027624309392266,\n \"acc_stderr\": 0.014052271211616436\n }\n}\n```", "repo_url": "https://huggingface.co/nicholasKluge/Aira-2-1B1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|arc:challenge|25_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T21_10_09.123262", "path": ["**/details_harness|drop|3_2023-10-28T21-10-09.123262.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T21-10-09.123262.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T21_10_09.123262", "path": ["**/details_harness|gsm8k|5_2023-10-28T21-10-09.123262.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T21-10-09.123262.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hellaswag|10_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-37-00.814670.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-37-00.814670.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T03-37-00.814670.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T03-37-00.814670.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": 
"2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T03-37-00.814670.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T03-37-00.814670.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T21_10_09.123262", "path": ["**/details_harness|winogrande|5_2023-10-28T21-10-09.123262.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T21-10-09.123262.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T03_37_00.814670", "path": ["results_2023-10-11T03-37-00.814670.parquet"]}, {"split": "2023_10_28T21_10_09.123262", "path": ["results_2023-10-28T21-10-09.123262.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T21-10-09.123262.parquet"]}]}]}
2023-10-28T20:10:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of nicholasKluge/Aira-2-1B1

## Dataset Description

- Homepage: 
- Repository: URL
- Paper: 
- Leaderboard: URL
- Point of Contact: clementine@URL

### Dataset Summary

Dataset automatically created during the evaluation run of model nicholasKluge/Aira-2-1B1 on the Open LLM Leaderboard.

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following (see the snippet reproduced after this card):

## Latest results

These are the latest results from run 2023-10-28T21:10:09.123262 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Contributions
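The snippet referenced above was stripped from this plain-text rendering; reproduced from the full card earlier in this record, it is:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1",
	"harness_winogrande_5",
	split="train")
```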
[ "# Dataset Card for Evaluation run of nicholasKluge/Aira-2-1B1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model nicholasKluge/Aira-2-1B1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T21:10:09.123262(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of nicholasKluge/Aira-2-1B1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model nicholasKluge/Aira-2-1B1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T21:10:09.123262(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 21, 31, 169, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of nicholasKluge/Aira-2-1B1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model nicholasKluge/Aira-2-1B1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T21:10:09.123262(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
1037832f14f1b474577bdd35a48c8042a8d72a2a
# Dataset Card for Evaluation run of openaccess-ai-collective/jackalope-7b

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/openaccess-ai-collective/jackalope-7b
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [openaccess-ai-collective/jackalope-7b](https://huggingface.co/openaccess-ai-collective/jackalope-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openaccess-ai-collective__jackalope-7b",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-24T19:34:20.159933](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__jackalope-7b/blob/main/results_2023-10-24T19-34-20.159933.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.008703859060402684,
        "em_stderr": 0.0009512557261398897,
        "f1": 0.07785130033557026,
        "f1_stderr": 0.0016803312427089365,
        "acc": 0.5335823999071311,
        "acc_stderr": 0.012043055014472743
    },
    "harness|drop|3": {
        "em": 0.008703859060402684,
        "em_stderr": 0.0009512557261398897,
        "f1": 0.07785130033557026,
        "f1_stderr": 0.0016803312427089365
    },
    "harness|gsm8k|5": {
        "acc": 0.28658074298711145,
        "acc_stderr": 0.012454841668337704
    },
    "harness|winogrande|5": {
        "acc": 0.7805840568271507,
        "acc_stderr": 0.01163126836060778
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
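A usage note for the card above: before loading one of the 64 per-task configurations, they can be enumerated with the standard `datasets` helper `get_dataset_config_names`. A minimal sketch:

```python
# Sketch: list the per-task configurations mentioned in the card above,
# then load one of them.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_openaccess-ai-collective__jackalope-7b"
configs = get_dataset_config_names(repo)
print(len(configs), "configurations")  # the card above says 64

gsm8k = load_dataset(repo, "harness_gsm8k_5", split="latest")
print(gsm8k)
```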
open-llm-leaderboard/details_openaccess-ai-collective__jackalope-7b
[ "region:us" ]
2023-10-11T03:09:03+00:00
{"pretty_name": "Evaluation run of openaccess-ai-collective/jackalope-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [openaccess-ai-collective/jackalope-7b](https://huggingface.co/openaccess-ai-collective/jackalope-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openaccess-ai-collective__jackalope-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T19:34:20.159933](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__jackalope-7b/blob/main/results_2023-10-24T19-34-20.159933.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.008703859060402684,\n \"em_stderr\": 0.0009512557261398897,\n \"f1\": 0.07785130033557026,\n \"f1_stderr\": 0.0016803312427089365,\n \"acc\": 0.5335823999071311,\n \"acc_stderr\": 0.012043055014472743\n },\n \"harness|drop|3\": {\n \"em\": 0.008703859060402684,\n \"em_stderr\": 0.0009512557261398897,\n \"f1\": 0.07785130033557026,\n \"f1_stderr\": 0.0016803312427089365\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.28658074298711145,\n \"acc_stderr\": 0.012454841668337704\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7805840568271507,\n \"acc_stderr\": 0.01163126836060778\n }\n}\n```", "repo_url": "https://huggingface.co/openaccess-ai-collective/jackalope-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|arc:challenge|25_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T19_34_20.159933", "path": ["**/details_harness|drop|3_2023-10-24T19-34-20.159933.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T19-34-20.159933.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T19_34_20.159933", "path": ["**/details_harness|gsm8k|5_2023-10-24T19-34-20.159933.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T19-34-20.159933.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hellaswag|10_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T04-08-39.650186.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T04-08-39.650186.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T04-08-39.650186.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T04-08-39.650186.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T04-08-39.650186.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T04-08-39.650186.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T19_34_20.159933", "path": ["**/details_harness|winogrande|5_2023-10-24T19-34-20.159933.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T19-34-20.159933.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T04_08_39.650186", "path": ["results_2023-10-11T04-08-39.650186.parquet"]}, {"split": "2023_10_24T19_34_20.159933", "path": ["results_2023-10-24T19-34-20.159933.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T19-34-20.159933.parquet"]}]}]}
2023-10-24T18:34:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of openaccess-ai-collective/jackalope-7b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model openaccess-ai-collective/jackalope-7b on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-24T19:34:20.159933 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
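The flattened card above references a load example ("you can for instance do the following:") whose code block was stripped during flattening; the full snippet is preserved verbatim in this record's `dataset_summary` metadata. Restated here with the standard `datasets` API:

```python
from datasets import load_dataset

# Per-sample details for one evaluated task (config) of this run;
# "train" always points to the latest results for that config.
data = load_dataset(
    "open-llm-leaderboard/details_openaccess-ai-collective__jackalope-7b",
    "harness_winogrande_5",
    split="train",
)
```

Any other config name from the metadata (e.g. `harness_gsm8k_5` or one of the `harness_hendrycksTest_*_5` configs) can be substituted for `harness_winogrande_5`.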
[ "# Dataset Card for Evaluation run of openaccess-ai-collective/jackalope-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/jackalope-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T19:34:20.159933(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of openaccess-ai-collective/jackalope-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/jackalope-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T19:34:20.159933(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 22, 31, 170, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of openaccess-ai-collective/jackalope-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/jackalope-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T19:34:20.159933(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
ea3a44b1b1bf906de509c54acc8c68593d9aff01
# Dataset Card for "sur_test_rt5_few_8" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carnival13/sur_test_rt5_few_8
[ "region:us" ]
2023-10-11T03:33:48+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 656906195, "num_examples": 900000}], "download_size": 161337040, "dataset_size": 656906195}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T03:34:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sur_test_rt5_few_8" More Information needed
[ "# Dataset Card for \"sur_test_rt5_few_8\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sur_test_rt5_few_8\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sur_test_rt5_few_8\"\n\nMore Information needed" ]
170963084ec775d403dfaca029e458c222394305
# Dataset Card for "ecc_crackdetector_dataset_exhaustive" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rishitunu/ecc_crackdetector_dataset_exhaustive
[ "region:us" ]
2023-10-11T03:34:17+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 13168386.682, "num_examples": 1289}], "download_size": 11961853, "dataset_size": 13168386.682}}
2023-10-11T03:50:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ecc_crackdetector_dataset_exhaustive" More Information needed
[ "# Dataset Card for \"ecc_crackdetector_dataset_exhaustive\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ecc_crackdetector_dataset_exhaustive\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ecc_crackdetector_dataset_exhaustive\"\n\nMore Information needed" ]
9a094bd039939ad704d882f661d0fa967e18ddaa
# Dataset Card for "Book4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
W1lson/Book4
[ "region:us" ]
2023-10-11T03:34:39+00:00
{"dataset_info": {"features": [{"name": "Source ID", "dtype": "int64"}, {"name": "Primary Text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9831, "num_examples": 87}], "download_size": 0, "dataset_size": 9831}}
2023-10-11T03:44:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Book4" More Information needed
[ "# Dataset Card for \"Book4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Book4\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Book4\"\n\nMore Information needed" ]
82f26fba21fb0c25231aeaa2f8296f6be334ec36
# CIVQA EasyOCR LayoutLM Train Dataset The CIVQA (Czech Invoice Visual Question Answering) dataset was created with EasyOCR, and it is encoded for LayoutLM models. This dataset contains only the train split. The validation part of the dataset can be found at this URL: https://huggingface.co/datasets/fimu-docproc-research/CIVQA_EasyOCR_LayoutLM_Validation The pre-encoded train dataset can be found at this link: https://huggingface.co/datasets/fimu-docproc-research/CIVQA_EasyOCR_Train All invoices used in this dataset were obtained from public sources. Across these invoices, we focused on 15 different entities, which are crucial for processing the invoices. - Invoice number - Variable symbol - Specific symbol - Constant symbol - Bank code - Account number - ICO - Total amount - Invoice date - Due date - Name of supplier - IBAN - DIC - QR code - Supplier's address The invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it withdrawn from research use, we kindly request that you access the following URL: https://forms.gle/tUVJKoB22oeTncUD6 We profoundly appreciate your cooperation and understanding in this matter.
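The card gives no loading example, so here is a minimal sketch, assuming only the standard `datasets` streaming API; the feature names (`input_ids`, `bbox`, `attention_mask`, `image`, `start_positions`, `end_positions`, `questions`, `answers`) come from this record's metadata below:

```python
from datasets import load_dataset

# The decoded dataset is large (~89 GB per the metadata), so stream it
# instead of downloading eagerly.
ds = load_dataset(
    "fimu-docproc-research/CIVQA_EasyOCR_LayoutLM_Train",
    split="train",
    streaming=True,
)

sample = next(iter(ds))
print(sample["questions"], "->", sample["answers"])
print(len(sample["input_ids"]))                     # LayoutLM token ids
print(len(sample["bbox"]), len(sample["bbox"][0]))  # word boxes, shape (512, 4)
print(sample["start_positions"], sample["end_positions"])  # answer span in input_ids
```

The `start_positions`/`end_positions` pair follows the extractive-QA span convention used by LayoutLM-style question-answering heads, which matches the card's note that the data is "encoded for LayoutLM models".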
fimu-docproc-research/CIVQA_EasyOCR_LayoutLM_Train
[ "language:cs", "license:mit", "finance", "region:us" ]
2023-10-11T03:37:51+00:00
{"language": ["cs"], "license": "mit", "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "bbox", "dtype": {"array2_d": {"shape": [512, 4], "dtype": "int32"}}}, {"name": "attention_mask", "sequence": "int32"}, {"name": "image", "dtype": {"array3_d": {"shape": [3, 224, 224], "dtype": "int32"}}}, {"name": "start_positions", "dtype": "int32"}, {"name": "end_positions", "dtype": "int32"}, {"name": "questions", "dtype": "string"}, {"name": "answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89021492745, "num_examples": 143765}], "download_size": 913954164, "dataset_size": 89021492745}, "tags": ["finance"]}
2023-11-21T20:49:21+00:00
[]
[ "cs" ]
TAGS #language-Czech #license-mit #finance #region-us
# CIVQA EasyOCR LayoutLM Train Dataset The CIVQA (Czech Invoice Visual Question Answering) dataset was created with EasyOCR, and it is encoded for LayoutLM models. This dataset contains only the train split. The validation part of the dataset can be found at this URL: URL The pre-encoded train dataset can be found at this link: URL All invoices used in this dataset were obtained from public sources. Across these invoices, we focused on 15 different entities, which are crucial for processing the invoices. - Invoice number - Variable symbol - Specific symbol - Constant symbol - Bank code - Account number - ICO - Total amount - Invoice date - Due date - Name of supplier - IBAN - DIC - QR code - Supplier's address The invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it withdrawn from research use, we kindly request that you access the following URL: URL We profoundly appreciate your cooperation and understanding in this matter.
[ "# CIVQA EasyOCR LayoutLM Train Dataset\n\nThe CIVQA (Czech Invoice Visual Question Answering) dataset was created with EasyOCR, and it is encoded for LayoutLM models. This dataset contains only the train split. The validation part of the dataset can be found on this URL: URL \nThe pre-encoded train dataset can be found on this link: URL\n\nAll invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.\n- Invoice number\n- Variable symbol\n- Specific symbol\n- Constant symbol\n- Bank code\n- Account number\n- ICO\n- Total amount\n- Invoice date\n- Due date\n- Name of supplier\n- IBAN\n- DIC\n- QR code\n- Supplier's address\n\nThe invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it removed from research purposes, we request you kindly to access the following URL: URL\n\nWe profoundly appreciate your cooperation and understanding in this matter." ]
[ "TAGS\n#language-Czech #license-mit #finance #region-us \n", "# CIVQA EasyOCR LayoutLM Train Dataset\n\nThe CIVQA (Czech Invoice Visual Question Answering) dataset was created with EasyOCR, and it is encoded for LayoutLM models. This dataset contains only the train split. The validation part of the dataset can be found on this URL: URL \nThe pre-encoded train dataset can be found on this link: URL\n\nAll invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.\n- Invoice number\n- Variable symbol\n- Specific symbol\n- Constant symbol\n- Bank code\n- Account number\n- ICO\n- Total amount\n- Invoice date\n- Due date\n- Name of supplier\n- IBAN\n- DIC\n- QR code\n- Supplier's address\n\nThe invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it removed from research purposes, we request you kindly to access the following URL: URL\n\nWe profoundly appreciate your cooperation and understanding in this matter." ]
[ 20, 290 ]
[ "passage: TAGS\n#language-Czech #license-mit #finance #region-us \n# CIVQA EasyOCR LayoutLM Train Dataset\n\nThe CIVQA (Czech Invoice Visual Question Answering) dataset was created with EasyOCR, and it is encoded for LayoutLM models. This dataset contains only the train split. The validation part of the dataset can be found on this URL: URL \nThe pre-encoded train dataset can be found on this link: URL\n\nAll invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.\n- Invoice number\n- Variable symbol\n- Specific symbol\n- Constant symbol\n- Bank code\n- Account number\n- ICO\n- Total amount\n- Invoice date\n- Due date\n- Name of supplier\n- IBAN\n- DIC\n- QR code\n- Supplier's address\n\nThe invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it removed from research purposes, we request you kindly to access the following URL: URL\n\nWe profoundly appreciate your cooperation and understanding in this matter." ]
697aa00334eebe57ec6e13b8217ee0150d8a8a08
# Dataset Card for "ddb740fe" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/ddb740fe
[ "region:us" ]
2023-10-11T03:44:45+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 201, "num_examples": 10}], "download_size": 1398, "dataset_size": 201}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T03:44:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ddb740fe" More Information needed
[ "# Dataset Card for \"ddb740fe\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ddb740fe\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ddb740fe\"\n\nMore Information needed" ]
1273f5a747b18ecefd0042804e2cfa6838859773
# Dataset Card for "ISIC_Melanoma" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MegPaulson/ISIC_Melanoma
[ "region:us" ]
2023-10-11T03:49:24+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_seg", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55448028.0, "num_examples": 438}], "download_size": 54990564, "dataset_size": 55448028.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T04:22:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ISIC_Melanoma" More Information needed
[ "# Dataset Card for \"ISIC_Melanoma\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ISIC_Melanoma\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ISIC_Melanoma\"\n\nMore Information needed" ]
3adf26a8241078ddd5f8731f778c1e76ca851d2b
# Dataset Card for SuperDialseg ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation - **Leaderboard:** [https://github.com/Coldog2333/SuperDialseg](https://github.com/Coldog2333/SuperDialseg) - **Point of Contact:** [email protected] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages: English ## Dataset Structure ### Data Instances ``` { "dial_data": { "super_dialseg": [ { "dial_id": "8df07b7a98990db27c395cb1f68a962e", "turns": [ { "da": "query_condition", "role": "user", "turn_id": 0, "utterance": "Hello, I forgot o update my address, can you help me with that?", "topic_id": 0, "segmentation_label": 0 }, ... { "da": "respond_solution", "role": "agent", "turn_id": 11, "utterance": "DO NOT contact the New York State DMV to dispute whether you violated a toll regulation or failed to pay the toll , fees or other charges", "topic_id": 4, "segmentation_label": 0 } ], ... } ] } ``` ### Data Fields #### Dialogue-Level + `dial_id`: ID of a dialogue; + `turns`: All utterances of a dialogue. #### Utterance-Level + `da`: Dialogue Act annotation derived from the original DGDS dataset; + `role`: Role annotation derived from the original DGDS dataset; + `turn_id`: ID of an utterance; + `utterance`: Text of the utterance; + `topic_id`: ID (order) of the current topic; + `segmentation_label`: 1: it is the end of a topic; 0: others. ### Data Splits SuperDialseg follows the dataset splits of the original DGDS dataset. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization SuperDialseg was built on top of the doc2dial and MultiDoc2dial datasets. Please refer to the original papers for more details. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The annotation of dialogue segmentation points is constructed using a set of well-designed strategies. Please refer to the paper for more details. Other annotations like Dialogue Act and Role information are derived from doc2dial and MultiDoc2dial datasets. 
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Apache License Version 2.0, following the licenses of doc2dial and MultiDoc2dial. ### Citation Information Coming soon ### Contributions Thanks to [@Coldog2333](https://github.com/Coldog2333) for adding this dataset.
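To make the field semantics concrete, here is a minimal sketch of recovering topic segments from `segmentation_label`, written against the raw structure shown under Data Instances; the local file path is hypothetical, so adapt it to wherever the split files live:

```python
import json

# Structure per the card: dial_data -> super_dialseg -> list of dialogues,
# each with "turns"; segmentation_label == 1 marks the end of a topic.
with open("super_dialseg/train.json") as f:  # hypothetical path
    data = json.load(f)

for dialogue in data["dial_data"]["super_dialseg"]:
    segments, current = [], []
    for turn in dialogue["turns"]:
        current.append(turn["utterance"])
        if turn["segmentation_label"] == 1:
            segments.append(current)
            current = []
    if current:  # trailing turns after the last marked boundary
        segments.append(current)
    print(dialogue["dial_id"], "->", len(segments), "segments")
```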
Coldog2333/super_dialseg
[ "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "dialogue segmentation", "region:us" ]
2023-10-11T04:28:11+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "tags": ["dialogue segmentation"]}
2023-10-11T05:26:51+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-apache-2.0 #dialogue segmentation #region-us
# Dataset Card for SuperDialseg ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation - Leaderboard: URL - Point of Contact: [email protected] ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages: English ## Dataset Structure ### Data Instances ### Data Fields #### Dialogue-Level + 'dial_id': ID of a dialogue; + 'turns': All utterances of a dialogue. #### Utterance-Level + 'da': Dialogue Act annotation derived from the original DGDS dataset; + 'role': Role annotation derived from the original DGDS dataset; + 'turn_id': ID of an utterance; + 'utterance': Text of the utterance; + 'topic_id': ID (order) of the current topic; + 'segmentation_label': 1: it is the end of a topic; 0: others. ### Data Splits SuperDialseg follows the dataset splits of the original DGDS dataset. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization SuperDialseg was built on top of the doc2dial and MultiDoc2dial datasets. Please refer to the original papers for more details. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? The annotation of dialogue segmentation points is constructed using a set of well-designed strategies. Please refer to the paper for more details. Other annotations like Dialogue Act and Role information are derived from doc2dial and MultiDoc2dial datasets. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Apache License Version 2.0, following the licenses of doc2dial and MultiDoc2dial. Coming soon ### Contributions Thanks to @Coldog2333 for adding this dataset.
[ "# Dataset Card for SuperDialseg", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: \n- Paper: SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation\n- Leaderboard: URL\n- Point of Contact: [email protected]", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages: English", "## Dataset Structure", "### Data Instances", "### Data Fields", "#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.", "#### Utterance-Level\n+ 'da': Dialogue Act annotation derived from the original DGDS dataset;\n+ 'role': Role annotation derived from the original DGDS dataset;\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.", "### Data Splits\n\nSuperDialseg follows the dataset splits of the original DGDS dataset.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nSuperDialseg was built on the top of doc2dial and MultiDoc2dial datasets.\nPlease refer to the original papers for more details.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe annotation of dialogue segmentation points is constructed by a set of well-designed strategy. Please refer to the paper for more details.\n\nOther annotations like Dialogue Act and Role information are derived from doc2dial and MultiDoc2dial datasets.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nApache License Version 2.0, following the licenses of doc2dial and MultiDoc2dial.\n\n\n\nComing soon", "### Contributions\n\nThanks to @Coldog2333 for adding this dataset." ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-apache-2.0 #dialogue segmentation #region-us \n", "# Dataset Card for SuperDialseg", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: \n- Paper: SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation\n- Leaderboard: URL\n- Point of Contact: [email protected]", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages: English", "## Dataset Structure", "### Data Instances", "### Data Fields", "#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.", "#### Utterance-Level\n+ 'da': Dialogue Act annotation derived from the original DGDS dataset;\n+ 'role': Role annotation derived from the original DGDS dataset;\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.", "### Data Splits\n\nSuperDialseg follows the dataset splits of the original DGDS dataset.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nSuperDialseg was built on the top of doc2dial and MultiDoc2dial datasets.\nPlease refer to the original papers for more details.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe annotation of dialogue segmentation points is constructed by a set of well-designed strategy. Please refer to the paper for more details.\n\nOther annotations like Dialogue Act and Role information are derived from doc2dial and MultiDoc2dial datasets.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nApache License Version 2.0, following the licenses of doc2dial and MultiDoc2dial.\n\n\n\nComing soon", "### Contributions\n\nThanks to @Coldog2333 for adding this dataset." ]
[ 36, 9, 125, 55, 6, 10, 6, 6, 6, 5, 34, 117, 25, 5, 7, 4, 46, 10, 5, 5, 70, 8, 8, 7, 8, 7, 5, 6, 32, 18 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-apache-2.0 #dialogue segmentation #region-us \n# Dataset Card for SuperDialseg## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository: \n- Paper: SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation\n- Leaderboard: URL\n- Point of Contact: [email protected]### Dataset Summary### Supported Tasks and Leaderboards### Languages: English## Dataset Structure### Data Instances### Data Fields#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.#### Utterance-Level\n+ 'da': Dialogue Act annotation derived from the original DGDS dataset;\n+ 'role': Role annotation derived from the original DGDS dataset;\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.### Data Splits\n\nSuperDialseg follows the dataset splits of the original DGDS dataset.## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization\n\nSuperDialseg was built on the top of doc2dial and MultiDoc2dial datasets.\nPlease refer to the original papers for more details." ]
a6796ef9b4ef491cea5f47981050600cc4ca2cc9
# Dataset Card for "jy_finetune_sd" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dhkim123/jy_finetune_sd
[ "region:us" ]
2023-10-11T04:38:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 37668449.2, "num_examples": 1300}], "download_size": 35715363, "dataset_size": 37668449.2}}
2023-10-11T20:56:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "jy_finetune_sd" More Information needed
[ "# Dataset Card for \"jy_finetune_sd\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"jy_finetune_sd\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"jy_finetune_sd\"\n\nMore Information needed" ]
1af38e7611380e588341bfe7d72b0fcdeac3dd67
# Dataset Card for "RMData" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
W1lson/RMData
[ "region:us" ]
2023-10-11T04:38:59+00:00
{"dataset_info": {"features": [{"name": "Source ID", "dtype": "int64"}, {"name": "Primary Text", "dtype": "string"}, {"name": "Artifact Type", "dtype": "string"}, {"name": "Design Package", "dtype": "string"}, {"name": "Location", "dtype": "string"}, {"name": "Verification Method", "dtype": "string"}, {"name": "Validation Method", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6326, "num_examples": 35}], "download_size": 7719, "dataset_size": 6326}}
2023-10-11T04:39:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "RMData" More Information needed
[ "# Dataset Card for \"RMData\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"RMData\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"RMData\"\n\nMore Information needed" ]
7cb32852510c77bedaf51f3148bdf828f5356fa3
# Dataset Card for "three_layouts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sankettgorey/three_layouts
[ "region:us" ]
2023-10-11T04:59:50+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 449387243.1132901, "num_examples": 1442}, {"name": "test", "num_bytes": 55106176.92124237, "num_examples": 181}, {"name": "validation", "num_bytes": 55521421.31946755, "num_examples": 180}], "download_size": 469923853, "dataset_size": 560014841.354}}
2023-10-11T05:09:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "three_layouts" More Information needed
[ "# Dataset Card for \"three_layouts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"three_layouts\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"three_layouts\"\n\nMore Information needed" ]
09afe2ec7907c1412c8f0f850b9bc21dbf28757c
# Dataset Card for "ad-copy-generation" Formatted the dataset https://huggingface.co/datasets/jaykin01/advertisement-copy to follow the Llama V2 chat template for instruction tuning. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
smangrul/ad-copy-generation
[ "region:us" ]
2023-10-11T05:08:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 445199.82471516216, "num_examples": 1000}, {"name": "test", "num_bytes": 62773.17528483786, "num_examples": 141}], "download_size": 194198, "dataset_size": 507973.0}}
2023-10-11T05:10:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ad-copy-generation" Formatted the dataset URL to follow the Llama V2 chat template for instruction tuning. More Information needed
[ "# Dataset Card for \"ad-copy-generation\"\n\nFormatted the dataset URL to follow the Llama V2 chat template for instruction tuning.\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ad-copy-generation\"\n\nFormatted the dataset URL to follow the Llama V2 chat template for instruction tuning.\n\nMore Information needed" ]
[ 6, 37 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ad-copy-generation\"\n\nFormatted the dataset URL to follow the Llama V2 chat template for instruction tuning.\n\nMore Information needed" ]
f01bf171de3106156c2f23f3fd57e0b4b5f8fd8f
<h2 style="text-align: left;"><a href="https://www.globalfitnessmart.com/get-progenifix"><span style="background-color: #ffcc00; color: blue;"><strong>{</strong><strong>Progenifix - Official Website -- Order Now}</strong></span></a></h2> <h2><strong>➡️<span style="color: #ff9900;">● For Order Official Website - <a href="https://www.globalfitnessmart.com/get-progenifix">https://www.globalfitnessmart.com/get-progenifix</a></span></strong><br /><strong>➡️<span style="color: #33cccc;">● Item Name: &mdash; <a href="https://www.globalfitnessmart.com/get-progenifix">Progenifix</a></span></strong><br /><strong>➡️<span style="color: #99cc00;">● Ingredients: &mdash; All Natural</span></strong><br /><strong>➡️<span style="color: #339966;">● Incidental Effects: &mdash; NA</span></strong><br /><strong>➡️<span style="color: purple;"><span style="color: red;">● Accessibility: &mdash; <a href="https://www.globalfitnessmart.com/get-progenifix">Online</a></span><br /></span></strong></h2> <h2><a href="https://www.globalfitnessmart.com/get-progenifix"><strong><span style="background-color: #ffcc00; color: blue;">➡️Hurry Up - Limited Time Offer - Purchase Now</span></strong></a><br /><a href="https://www.globalfitnessmart.com/get-progenifix"><strong><span style="background-color: #ffcc00; color: blue;">➡️Hurry Up - Limited Time Offer - Purchase Now</span></strong></a><br /><a href="https://www.globalfitnessmart.com/get-progenifix"><strong><span style="background-color: #ffcc00; color: blue;">➡️Hurry Up - Limited Time Offer - Purchase Now</span></strong></a></h2> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-progenifix"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIb1ZBKUI-c3pbno8_SGlZzeB2XmfIgAcDtuAGXaq2SzcI2uUX2EhU5QZmLGCDEIM22ljE0oxK7v9tv9L_Ash8gQg3qC9HNKQvQbLmmNjjDvTiI8DES1v1jiRh8BfxNAhU-tRNR8ZrEJEhqsSgYNVfTLcHoE9wrHigWjR9zfdO4SSsRpKJfGg4nhIfFw8p/w640-h480/Progenifix%2010.png" alt="" width="640" height="480" border="0" data-original-height="417" data-original-width="556" /></a></div> <h2><strong>Progenifix Weight Loss Supplement: Reviews and Extensive guide 2023 Read Before</strong></h2> <p>Progenifix is a supplement that helps consumers to improve weight loss and support good health. The formula is easy to take every day, though some consumers will notice that they burn through weight more rapidly than they ever have before.</p> <h2><strong>What is Progenifix?</strong></h2> <p>If you are looking to lose weight, you may have heard of Progenifix. This weight loss supplement is designed to help you burn fat and lose weight quickly. But what is Progenifix and how does it work? In this article, we will take a closer look at the Progenifix weight loss supplement and provide an in-depth review.</p> <p>Progenifix is a weight loss supplement that contains a powerful blend of ingredients that are designed to help you burn fat and lose weight quickly. Other ingredients in Progenifix include green coffee bean extract, green tea extract, and African mango seed extract. 
These ingredients are all known to be effective for weight loss and they work together to help you burn fat quickly.</p> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="https://www.globalfitnessmart.com/get-progenifix"><strong>SPECIAL PROMO: Get Progenifix at the Lowest Discounted Price Online</strong></a></span></h2> <h2><strong>How Does Progenifix Work?</strong></h2> <p>Progenifix is a supplement that utilizes natural, scientifically supported ingredients to enhance weight loss results and improve overall well-being. According to the product's official website, it has three primary benefits, which are:</p> <ul> <li>Supporting weight loss</li> <li>Promoting wellness and vitality</li> <li>Supporting immune system</li> </ul> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-progenifix"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnniR87uocacvkq4TTxlfwddQU1kvf3128afknQr-lWE2a82lZWewC7W5Ik-UwQUYMZox5t7T3SqHzcT85N3rlokkVNZooQaGQMKhstlhrv3MaTvGG9TH1BxttNJc-m8p9JF840IhyphenhyphenpcTHnUEnF1mbYT0CXCv7wX23Spp3b9_FadKCc2kO8tOl5VpUjQKj/w640-h360/Progenifix-Ingredients.webp" alt="" width="640" height="360" border="0" data-original-height="1260" data-original-width="2240" /></a></div> <h2><strong>What is in Progenifix Natural Ingredients</strong></h2> <p>Exclusively offering natural ingredients, the Progenifix formula includes the following mushrooms:</p> <p><strong>#Royal Sun Agaricus</strong></p> <p>Royal Sun Mushrooms are incredibly supportive of the immune system and its response to potential health threats. With a high abundance of immunomodulating polysaccharides, consumers inherently reduce their risk of infection, allergic reactions, and even asthma, according to early research with mice. It can also reduce inflammation, especially for consumers with inflammatory bowel disease.</p> <p><strong>#Cordyceps Sinensis</strong></p> <p>Cordyceps Sinensis is a common mushroom, and traditional healers use it in Sikkim. These experts believe this mushroom can help with all kinds of ailments when treated like a tonic. With the proper preparation, the creators sometimes use it to boost energy, regulate the appetite, and promote better endurance and stamina.</p> <p><strong>#Chaga</strong></p> <p>Based on current evidence, Chaga mushrooms have antioxidants that can soothe arthritis and reduce high blood pressure. This mushroom eases inflammation throughout the body, including the stomach lining and the joints. By dealing with inflammation, consumers can adequately digest their food without pain.</p> <p><strong>#Lion's Mane</strong></p> <p>Lion's Mane Mushrooms are quite a sight to see, and they are just as beneficial as they are interesting to look at. Current research links the use of lion's mane mushrooms to preventing problems like dementia, heart disease, and cancer. When used by animals, this mushroom reduces the risk of diabetes as well.<br />Many of the benefits consumers reap with Lion's Mane Mushroom are in the brain.</p> <p><strong>#White Button</strong></p> <p>The final mushroom of this blend is usually found in the produce section of even the smallest grocery stores &ndash; white button mushrooms. 
They embody what consumers often think of when they hear "mushrooms," but they can also greatly benefit the user's health.</p> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="https://www.globalfitnessmart.com/get-progenifix"><strong>SPECIAL PROMO[Limited Discount]: "Progenifix USA"Official Website!</strong></a></span></h2> <h2><strong>The Benefits of Progenifix Supplement</strong></h2> <p>● Progenifix supplement helps eliminate excess fat stored in the body</p> <p>● The ingredients in the Progenifix formula are rich in fiber which helps suppress appetite and food cravings</p> <p>● Progenifix is rich in anti-aging compounds which support metabolic functions</p> <p>● Progenifix supplement helps improve energy levels, mood, and mental clarity</p> <p>● The formula assists in strengthening the immune system by reducing inflammation</p> <p>● It has antioxidants that prevent damage from free radicals and oxidative stress</p> <p>● Progenifix formula can treat digestive and circulatory disorders and reduce the symptoms of diabetes</p> <p>● The supplement provides the necessary nutrients the body needs to stay healthy</p> <p>● The supplement targets the root cause of slow metabolism and restores regular metabolic activity</p> <p>● The formula provides better sleep and boosts confidence and self-esteem</p> <h2><strong>How to Use Progenifix</strong></h2> <p>Assuming you are referring to the weight loss supplement known as Progenifix: When using any weight loss supplement, it is important to follow the instructions on the product label. It is also important to consult with a healthcare professional before starting any new supplement, especially if you have any medical conditions or are taking any medications.</p> <p>Progenifix is a dietary supplement that comes in capsule form. The recommended dose is two capsules per day, taken with water. It is best to take the capsules with meals. For best results, it is recommended to use Progenifix for at least 8 weeks.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-progenifix"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5Vy_IcQ9rpiawvyigHopsJt-HE9IHu3YESbkI8uX_eQfQF4ZjVFm1ABW4ZioTHDgbstKigkedI7ZguHXPwRfiD9qM_X6HCFE6iWcCzLtrOqwUjLd8X_O1YcA8Qz5kZI9up7IZxjMer3T57voKXMPy7q_uSfI2XD4i6CYe__3zZgTqnzR70CyY52q-7vAz/w266-h400/Progenifix%20weight%20loss.jpg" alt="" width="266" height="400" border="0" data-original-height="1104" data-original-width="736" /></a></div> <h2><strong>Frequently Asked Questions About Progenifix</strong></h2> <p><strong>Q - How often should the Progenifix formula be taken?</strong></p> <p>A - Users must take two capsules daily to get the desired results. The best time of day to use it is in the morning.</p> <p><strong>Q - What does Progenifix taste like?</strong></p> <p>A - Nothing! Even with the plethora of mushrooms, consumers won't have to worry about taste because it is condensed within the capsules.</p> <p><strong>Q - How long will users have to keep using Progenifix?</strong></p> <p>A - Since every person starts at a different place in their weight loss, they also have different paces that they go at in their progress. While the total amount of time the user needs to stick with Progenifix will change, most of the initial progress will be noticeable in the first week. 
Sticking with this regimen for any time is beneficial, but users who commit to using it for longer will see the most intense changes.</p> <p><strong>Q - Is it possible to purchase Progenifix from a different store?</strong></p> <p>A - No. The creators want to ensure that users can get the best price possible, which is why the only place that consumers can purchase Progenifix is on the official website.</p> <p><strong>Q - What should the user do if they lose weight too quickly?</strong></p> <p>A - This formula is a highly effective remedy, which is why some consumers might grow concerned about how quickly they shed pounds. If this rate of weight loss is overwhelming or even alarming, they can cut the dose to no more than one capsule a day.</p> <p><strong>Q - What is the best number of bottles to order?</strong></p> <p>A - Each bottle contains enough Progenifix formula to last through an entire month, meaning users should purchase the same number of bottles as the number of months they want to use it. Taking the formula for 3-6 months reaps the best rewards, giving users the best price on their order.</p> <p><strong>Q - What if the formula doesn't work for the user?</strong></p> <p>A - If the user finds that the Progenifix formula doesn't help with their weight loss, they can get a full refund with a money-back guarantee within 60 days. A refund can be established with the customer service team before sending back any products.</p> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="https://www.globalfitnessmart.com/get-progenifix"><strong>(EXCLUSIVE OFFER)Click Here : "Progenifix USA"Official Website!</strong></a></span></h2> <h2><strong>Buying a Bottle of Progenifix</strong></h2> <p>The only way consumers can order Progenifix is by visiting the official website . While three options are available for the packages, only consumers who demand the most significant quantity (6 bottles) will get free shipping on their purchase.</p> <p>The available packages include the following:</p> <p><strong>● One bottle for $69</strong></p> <p><strong>● Three bottles for $177 ($59 per bottle)</strong></p> <p><strong>● Six bottles for $294 ($49 per bottle)</strong></p> <h2><strong>Conclusion</strong></h2> <p>In conclusion, Progenifix is a great weight loss supplement that can help you achieve your weight goals with its natural ingredients and superb formulation. It has received favorable reviews from customers who have tried it and reported positive results in terms of energy levels and fat burning capabilities. We highly recommend giving this product a try if you are looking for an effective way to lose weight without any side effects. 
weight loss supplements.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-progenifix"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkmDXhB5tFDMCVQJ3hv3xh5qTSt7Zo748CAfFLOhbx_TraK6GZdQ1-Qiy-KYQCsezLo5-M1eppklbei6Yp4c-esFrYezEoU_xUIip55gCUkBbLjaZiGgTJBwA2Rg4V5MLllp67NOz_62p-9F1MsEKMIaS-Mynx5QQeoythx6P0GZeNv06E1J2q-PK78kwM/w640-h422/Progenifix%20price.jpg" alt="" width="640" height="422" border="0" data-original-height="413" data-original-width="625" /></a></div> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="https://www.globalfitnessmart.com/get-progenifix"><strong>(EXCLUSIVE OFFER)Click Here : "Progenifix USA"Official Website!</strong></a></span></h2> <h2><strong># Read More</strong></h2> <p><strong><a href="https://progenifix-official.clubeo.com">https://progenifix-official.clubeo.com</a></strong></p> <p><strong><a href="https://progenifix-official.clubeo.com/page/progenifix-is-legit-2023-updated-report.html">https://progenifix-official.clubeo.com/page/progenifix-is-legit-2023-updated-report.html</a></strong></p> <p><strong><a href="https://progenifix-official.clubeo.com/page/progenifix-review-2023-does-it-really-work-for-weight-loss.html">https://progenifix-official.clubeo.com/page/progenifix-review-2023-does-it-really-work-for-weight-loss.html</a></strong></p> <p><strong><a href="https://progenifix-official.clubeo.com/calendar/2023/10/19/progenifix-exposed-reviews-2023-legit-scam-alert">https://progenifix-official.clubeo.com/calendar/2023/10/19/progenifix-exposed-reviews-2023-legit-scam-alert</a></strong></p> <p><strong><a href="https://groups.google.com/g/progenifix-official-us/c/xeAEDlM-BhM">https://groups.google.com/g/progenifix-official-us/c/xeAEDlM-BhM</a></strong></p> <p>&nbsp;</p>
progenifixofficial/progenifix
[ "region:us" ]
2023-10-11T05:11:04+00:00
{}
2023-10-11T05:11:21+00:00
[]
[]
TAGS #region-us
<h2 style="text-align: left;"><a href="URL style="background-color: #ffcc00; color: blue;"><strong>{</strong><strong>Progenifix - Official Website -- Order Now}</strong></span></a></h2> <h2><strong>️<span style="color: #ff9900;">● For Order Official Website - <a href="URL/URL /><strong>️<span style="color: #33cccc;">● Item Name: &mdash; <a href="URL /><strong>️<span style="color: #99cc00;">● Ingredients: &mdash; All Natural</span></strong><br /><strong>️<span style="color: #339966;">● Incidental Effects: &mdash; NA</span></strong><br /><strong>️<span style="color: purple;"><span style="color: red;">● Accessibility: &mdash; <a href="URL /></span></strong></h2> <h2><a href="URL style="background-color: #ffcc00; color: blue;">️Hurry Up - Limited Time Offer - Purchase Now</span></strong></a><br /><a href="URL style="background-color: #ffcc00; color: blue;">️Hurry Up - Limited Time Offer - Purchase Now</span></strong></a><br /><a href="URL style="background-color: #ffcc00; color: blue;">️Hurry Up - Limited Time Offer - Purchase Now</span></strong></a></h2> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="480" border="0" data-original-height="417" data-original-width="556" /></a></div> <h2><strong>Progenifix Weight Loss Supplement: Reviews and Extensive guide 2023 Read Before</strong></h2> <p>Progenifix is a supplement that helps consumers to improve weight loss and support good health. The formula is easy to take every day, though some consumers will notice that they burn through weight more rapidly than they ever have before.</p> <h2><strong>What is Progenifix?</strong></h2> <p>If you are looking to lose weight, you may have heard of Progenifix. This weight loss supplement is designed to help you burn fat and lose weight quickly. But what is Progenifix and how does it work? In this article, we will take a closer look at the Progenifix weight loss supplement and provide an in-depth review.</p> <p>Progenifix is a weight loss supplement that contains a powerful blend of ingredients that are designed to help you burn fat and lose weight quickly. Other ingredients in Progenifix include green coffee bean extract, green tea extract, and African mango seed extract. These ingredients are all known to be effective for weight loss and they work together to help you burn fat quickly.</p> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="URL PROMO: Get Progenifix at the Lowest Discounted Price Online</strong></a></span></h2> <h2><strong>How Does Progenifix Work?</strong></h2> <p>Progenifix is a supplement that utilizes natural, scientifically supported ingredients to enhance weight loss results and improve overall well-being. 
According to the product's official website, it has three primary benefits, which are:</p> <ul> <li>Supporting weight loss</li> <li>Promoting wellness and vitality</li> <li>Supporting immune system</li> </ul> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="360" border="0" data-original-height="1260" data-original-width="2240" /></a></div> <h2><strong>What is in Progenifix Natural Ingredients</strong></h2> <p>Exclusively offering natural ingredients, the Progenifix formula includes the following mushrooms:</p> <p><strong>#Royal Sun Agaricus</strong></p> <p>Royal Sun Mushrooms are incredibly supportive of the immune system and its response to potential health threats. With a high abundance of immunomodulating polysaccharides, consumers inherently reduce their risk of infection, allergic reactions, and even asthma, according to early research with mice. It can also reduce inflammation, especially for consumers with inflammatory bowel disease.</p> <p><strong>#Cordyceps Sinensis</strong></p> <p>Cordyceps Sinensis is a common mushroom, and traditional healers use it in Sikkim. These experts believe this mushroom can help with all kinds of ailments when treated like a tonic. With the proper preparation, the creators sometimes use it to boost energy, regulate the appetite, and promote better endurance and stamina.</p> <p><strong>#Chaga</strong></p> <p>Based on current evidence, Chaga mushrooms have antioxidants that can soothe arthritis and reduce high blood pressure. This mushroom eases inflammation throughout the body, including the stomach lining and the joints. By dealing with inflammation, consumers can adequately digest their food without pain.</p> <p><strong>#Lion's Mane</strong></p> <p>Lion's Mane Mushrooms are quite a sight to see, and they are just as beneficial as they are interesting to look at. Current research links the use of lion's mane mushrooms to preventing problems like dementia, heart disease, and cancer. When used by animals, this mushroom reduces the risk of diabetes as well.<br />Many of the benefits consumers reap with Lion's Mane Mushroom are in the brain.</p> <p><strong>#White Button</strong></p> <p>The final mushroom of this blend is usually found in the produce section of even the smallest grocery stores &ndash; white button mushrooms. 
They embody what consumers often think of when they hear "mushrooms," but they can also greatly benefit the user's health.</p> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="URL PROMO[Limited Discount]: "Progenifix USA"Official Website!</strong></a></span></h2> <h2><strong>The Benefits of Progenifix Supplement</strong></h2> <p>● Progenifix supplement helps eliminate excess fat stored in the body</p> <p>● The ingredients in the Progenifix formula are rich in fiber which helps suppress appetite and food cravings</p> <p>● Progenifix is rich in anti-aging compounds which support metabolic functions</p> <p>● Progenifix supplement helps improve energy levels, mood, and mental clarity</p> <p>● The formula assists in strengthening the immune system by reducing inflammation</p> <p>● It has antioxidants that prevent damage from free radicals and oxidative stress</p> <p>● Progenifix formula can treat digestive and circulatory disorders and reduce the symptoms of diabetes</p> <p>● The supplement provides the necessary nutrients the body needs to stay healthy</p> <p>● The supplement targets the root cause of slow metabolism and restores regular metabolic activity</p> <p>● The formula provides better sleep and boosts confidence and self-esteem</p> <h2><strong>How to Use Progenifix</strong></h2> <p>Assuming you are referring to the weight loss supplement known as Progenifix: When using any weight loss supplement, it is important to follow the instructions on the product label. It is also important to consult with a healthcare professional before starting any new supplement, especially if you have any medical conditions or are taking any medications.</p> <p>Progenifix is a dietary supplement that comes in capsule form. The recommended dose is two capsules per day, taken with water. It is best to take the capsules with meals. For best results, it is recommended to use Progenifix for at least 8 weeks.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="266" height="400" border="0" data-original-height="1104" data-original-width="736" /></a></div> <h2><strong>Frequently Asked Questions About Progenifix</strong></h2> <p><strong>Q - How often should the Progenifix formula be taken?</strong></p> <p>A - Users must take two capsules daily to get the desired results. The best time of day to use it is in the morning.</p> <p><strong>Q - What does Progenifix taste like?</strong></p> <p>A - Nothing! Even with the plethora of mushrooms, consumers won't have to worry about taste because it is condensed within the capsules.</p> <p><strong>Q - How long will users have to keep using Progenifix?</strong></p> <p>A - Since every person starts at a different place in their weight loss, they also have different paces that they go at in their progress. While the total amount of time the user needs to stick with Progenifix will change, most of the initial progress will be noticeable in the first week. Sticking with this regimen for any time is beneficial, but users who commit to using it for longer will see the most intense changes.</p> <p><strong>Q - Is it possible to purchase Progenifix from a different store?</strong></p> <p>A - No. 
The creators want to ensure that users can get the best price possible, which is why the only place that consumers can purchase Progenifix is on the official website.</p> <p><strong>Q - What should the user do if they lose weight too quickly?</strong></p> <p>A - This formula is a highly effective remedy, which is why some consumers might grow concerned about how quickly they shed pounds. If this rate of weight loss is overwhelming or even alarming, they can cut the dose to no more than one capsule a day.</p> <p><strong>Q - What is the best number of bottles to order?</strong></p> <p>A - Each bottle contains enough Progenifix formula to last through an entire month, meaning users should purchase the same number of bottles as the number of months they want to use it. Taking the formula for 3-6 months reaps the best rewards, giving users the best price on their order.</p> <p><strong>Q - What if the formula doesn't work for the user?</strong></p> <p>A - If the user finds that the Progenifix formula doesn't help with their weight loss, they can get a full refund with a money-back guarantee within 60 days. A refund can be established with the customer service team before sending back any products.</p> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="URL OFFER)Click Here : "Progenifix USA"Official Website!</strong></a></span></h2> <h2><strong>Buying a Bottle of Progenifix</strong></h2> <p>The only way consumers can order Progenifix is by visiting the official website . While three options are available for the packages, only consumers who demand the most significant quantity (6 bottles) will get free shipping on their purchase.</p> <p>The available packages include the following:</p> <p><strong>● One bottle for $69</strong></p> <p><strong>● Three bottles for $177 ($59 per bottle)</strong></p> <p><strong>● Six bottles for $294 ($49 per bottle)</strong></p> <h2><strong>Conclusion</strong></h2> <p>In conclusion, Progenifix is a great weight loss supplement that can help you achieve your weight goals with its natural ingredients and superb formulation. It has received favorable reviews from customers who have tried it and reported positive results in terms of energy levels and fat burning capabilities. We highly recommend giving this product a try if you are looking for an effective way to lose weight without any side effects. weight loss supplements.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="422" border="0" data-original-height="413" data-original-width="625" /></a></div> <h2 style="text-align: center;"><span style="background-color: #ffcc00; color: #0000ff;"><a style="background-color: #ffcc00; color: #0000ff;" href="URL OFFER)Click Here : "Progenifix USA"Official Website!</strong></a></span></h2> <h2><strong># Read More</strong></h2> <p><strong><a href="URL">URL</a></strong></p> <p><strong><a href="URL/URL <p><strong><a href="URL/URL <p><strong><a href="URL/URL <p><strong><a href="URL/URL <p>&nbsp;</p>
[ "# Read More</strong></h2>\n<p><strong><a href=\"URL\">URL</a></strong></p>\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p>&nbsp;</p>" ]
[ "TAGS\n#region-us \n", "# Read More</strong></h2>\n<p><strong><a href=\"URL\">URL</a></strong></p>\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p>&nbsp;</p>" ]
[ 6, 92 ]
[ "passage: TAGS\n#region-us \n# Read More</strong></h2>\n<p><strong><a href=\"URL\">URL</a></strong></p>\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p>&nbsp;</p>" ]
f638cd52f8918812d1bcb60ee2a4e7584c07012c
# Dataset Card for "vary_datset1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tanvirsrbd1/vary_datset1
[ "region:us" ]
2023-10-11T05:14:42+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1499883, "num_examples": 2980}], "download_size": 486457, "dataset_size": 1499883}}
2023-10-11T05:14:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vary_datset1" More Information needed
[ "# Dataset Card for \"vary_datset1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vary_datset1\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vary_datset1\"\n\nMore Information needed" ]
89046221533a9a48fdc1f61716b376a5667f6247
# Dataset Card for SuperDialseg ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/HuiyuanXie/tiage](https://github.com/HuiyuanXie/tiage) - **Repository:** [https://github.com/HuiyuanXie/tiage](https://github.com/HuiyuanXie/tiage) - **Paper:** TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages: English ## Dataset Structure ### Data Instances ``` { "dial_data": { "tiage": [ { "dial_id": "tiage_dial_001", "turns": [ { "da": "", "role": "user", "turn_id": 0, "utterance": "hello , how are you doing tonight ?", "topic_id": 0, "segmentation_label": 0 }, ... { "da": "", "role": "user", "turn_id": 15, "utterance": "i bet it is oh i could not", "topic_id": 4, "segmentation_label": 1 } ], ... } ] } ``` ### Data Fields #### Dialogue-Level + `dial_id`: ID of a dialogue; + `turns`: All utterances of a dialogue. #### Utterance-Level + `da`: Dialogue Act annotation (here is Null); + `role`: Role annotation (here is user/agent/user/agent... in default); + `turn_id`: ID of an utterance; + `utterance`: Text of the utterance; + `topic_id`: ID (order) of the current topic; + `segmentation_label`: 1: it is the end of a topic; 0: others. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information MIT ### Citation Information ``` @article{xie2021tiage, title={TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling}, author={Xie, Huiyuan and Liu, Zhenghao and Xiong, Chenyan and Liu, Zhiyuan and Copestake, Ann}, journal={arXiv preprint arXiv:2109.04562}, year={2021} } ``` ### Contributions + Thanks to [@HuiyuanXie](https://github.com/HuiyuanXie/) for collecting this dataset. 
+ Thanks to [@Coldog2333](https://github.com/Coldog2333) for adding this dataset.
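The nested layout shown under "Data Instances" can be read straight from a raw JSON release of the corpus. A minimal sketch follows; the file name `tiage.json` is a hypothetical placeholder, and only the keys are taken from the instance above.

```python
# Minimal sketch: walk the 'dial_data' -> 'tiage' structure shown in
# the Data Instances section. The file name is a hypothetical placeholder.
import json

with open("tiage.json") as f:
    data = json.load(f)

for dial in data["dial_data"]["tiage"]:
    n_turns = len(dial["turns"])
    # segmentation_label == 1 marks the end of a topic
    n_boundaries = sum(t["segmentation_label"] for t in dial["turns"])
    print(f'{dial["dial_id"]}: {n_turns} turns, {n_boundaries} topic boundaries')
```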
Coldog2333/tiage
[ "size_categories:n<1K", "language:en", "license:mit", "dialogue segmentation", "region:us" ]
2023-10-11T05:15:49+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "tags": ["dialogue segmentation"]}
2023-10-11T05:27:05+00:00
[]
[ "en" ]
TAGS #size_categories-n<1K #language-English #license-mit #dialogue segmentation #region-us
# Dataset Card for SuperDialseg ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages: English ## Dataset Structure ### Data Instances ### Data Fields #### Dialogue-Level + 'dial_id': ID of a dialogue; + 'turns': All utterances of a dialogue. #### Utterance-Level + 'da': Dialogue Act annotation (here is Null); + 'role': Role annotation (here is user/agent/user/agent... in default); + 'turn_id': ID of an utterance; + 'utterance': Text of the utterance; + 'topic_id': ID (order) of the current topic; + 'segmentation_label': 1: it is the end of a topic; 0: others. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information MIT ### Contributions + Thanks to @HuiyuanXie for collecting this dataset. + Thanks to @Coldog2333 for adding this dataset.
[ "# Dataset Card for SuperDialseg", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages: English", "## Dataset Structure", "### Data Instances", "### Data Fields", "#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.", "#### Utterance-Level\n+ 'da': Dialogue Act annotation (here is Null);\n+ 'role': Role annotation (here is user/agent/user/agent... in default);\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nMIT", "### Contributions\n\n+ Thanks to @HuiyuanXie for collecting this dataset.\n+ Thanks to @Coldog2333 for adding this dataset." ]
[ "TAGS\n#size_categories-n<1K #language-English #license-mit #dialogue segmentation #region-us \n", "# Dataset Card for SuperDialseg", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages: English", "## Dataset Structure", "### Data Instances", "### Data Fields", "#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.", "#### Utterance-Level\n+ 'da': Dialogue Act annotation (here is Null);\n+ 'role': Role annotation (here is user/agent/user/agent... in default);\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nMIT", "### Contributions\n\n+ Thanks to @HuiyuanXie for collecting this dataset.\n+ Thanks to @Coldog2333 for adding this dataset." ]
[ 31, 9, 125, 43, 6, 10, 6, 6, 6, 5, 34, 115, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 7, 35 ]
[ "passage: TAGS\n#size_categories-n<1K #language-English #license-mit #dialogue segmentation #region-us \n# Dataset Card for SuperDialseg## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling\n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages: English## Dataset Structure### Data Instances### Data Fields#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.#### Utterance-Level\n+ 'da': Dialogue Act annotation (here is Null);\n+ 'role': Role annotation (here is user/agent/user/agent... in default);\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators" ]
1b0591d4a01f56ce461a3286cc3cc7c2f1e80c3d
# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-orca-7b-v1.0

## Dataset Description

- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-code-mistral-orca-7b-v1.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [uukuguy/speechless-code-mistral-orca-7b-v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-orca-7b-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-code-mistral-orca-7b-v1.0",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-24T15:07:12.352820](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-code-mistral-orca-7b-v1.0/blob/main/results_2023-10-24T15-07-12.352820.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.4526006711409396,
        "em_stderr": 0.005097407791242309,
        "f1": 0.4989010067114103,
        "f1_stderr": 0.004905672332696013,
        "acc": 0.42884877867222604,
        "acc_stderr": 0.009659566392137438
    },
    "harness|drop|3": {
        "em": 0.4526006711409396,
        "em_stderr": 0.005097407791242309,
        "f1": 0.4989010067114103,
        "f1_stderr": 0.004905672332696013
    },
    "harness|gsm8k|5": {
        "acc": 0.08263836239575435,
        "acc_stderr": 0.0075840892201481476
    },
    "harness|winogrande|5": {
        "acc": 0.7750591949486977,
        "acc_stderr": 0.01173504356412673
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
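The aggregated metrics shown under "Latest results" live in the "results" configuration mentioned above; a minimal sketch of pulling them follows. Using `split="train"` relies on the card's note that "train" always points to the latest results, and the exact row layout of that configuration is an assumption here.

```python
# Minimal sketch: load the aggregated "results" configuration.
# split="train" follows the card's note that "train" tracks the latest
# run; the row layout printed below is not documented and is assumed.
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-code-mistral-orca-7b-v1.0",
    "results",
    split="train",
)
print(results[0])  # aggregated em/f1/acc values, as in the JSON above
```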
open-llm-leaderboard/details_uukuguy__speechless-code-mistral-orca-7b-v1.0
[ "region:us" ]
2023-10-11T05:18:03+00:00
{"pretty_name": "Evaluation run of uukuguy/speechless-code-mistral-orca-7b-v1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-code-mistral-orca-7b-v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-orca-7b-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-code-mistral-orca-7b-v1.0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T15:07:12.352820](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-code-mistral-orca-7b-v1.0/blob/main/results_2023-10-24T15-07-12.352820.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4526006711409396,\n \"em_stderr\": 0.005097407791242309,\n \"f1\": 0.4989010067114103,\n \"f1_stderr\": 0.004905672332696013,\n \"acc\": 0.42884877867222604,\n \"acc_stderr\": 0.009659566392137438\n },\n \"harness|drop|3\": {\n \"em\": 0.4526006711409396,\n \"em_stderr\": 0.005097407791242309,\n \"f1\": 0.4989010067114103,\n \"f1_stderr\": 0.004905672332696013\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08263836239575435,\n \"acc_stderr\": 0.0075840892201481476\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7750591949486977,\n \"acc_stderr\": 0.01173504356412673\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/speechless-code-mistral-orca-7b-v1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|arc:challenge|25_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T15_07_12.352820", "path": ["**/details_harness|drop|3_2023-10-24T15-07-12.352820.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T15-07-12.352820.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T15_07_12.352820", "path": ["**/details_harness|gsm8k|5_2023-10-24T15-07-12.352820.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T15-07-12.352820.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hellaswag|10_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T06-17-39.611971.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T06-17-39.611971.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T06-17-39.611971.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T06-17-39.611971.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T06-17-39.611971.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T06-17-39.611971.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T15_07_12.352820", "path": ["**/details_harness|winogrande|5_2023-10-24T15-07-12.352820.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T15-07-12.352820.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T06_17_39.611971", "path": ["results_2023-10-11T06-17-39.611971.parquet"]}, {"split": "2023_10_24T15_07_12.352820", "path": ["results_2023-10-24T15-07-12.352820.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T15-07-12.352820.parquet"]}]}]}
2023-10-24T14:07:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-orca-7b-v1.0 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-orca-7b-v1.0 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-24T15:07:12.352820 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
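The load call referenced just above was stripped from this processed text; it is preserved verbatim in the row's metadata, so it can be restated here for readability (repository id, config name, and split exactly as the metadata gives them):

```python
from datasets import load_dataset

# Per-example details for the 5-shot WinoGrande run of this model;
# the "train" split always points at the latest evaluation run.
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-code-mistral-orca-7b-v1.0",
    "harness_winogrande_5",
    split="train",
)
```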
[ "# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-orca-7b-v1.0", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-orca-7b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T15:07:12.352820(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-orca-7b-v1.0", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-orca-7b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T15:07:12.352820(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 30, 31, 178, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-orca-7b-v1.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-orca-7b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T15:07:12.352820(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
5c2f4b525e2c9920f8e0f8679f7e782706d971bf
# Dataset Card for "vary_merged_dataset1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tanvirsrbd1/vary_merged_dataset1
[ "region:us" ]
2023-10-11T05:25:25+00:00
{"dataset_info": {"features": [{"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3337766, "num_examples": 5960}], "download_size": 1093625, "dataset_size": 3337766}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T05:25:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vary_merged_dataset1" More Information needed
[ "# Dataset Card for \"vary_merged_dataset1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vary_merged_dataset1\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vary_merged_dataset1\"\n\nMore Information needed" ]
b245f76c3ffb2a73ab53b4328883f11fe4d092f1
### Dataset Description

Original source: https://www.deepmind.com/open-source/kinetics

## innat/KineticsTop5

A small set from Kinetics-400. It contains 5 classes.

```python
{0: 'opening_bottle', 1: 'squat', 2: 'reading_book', 3: 'sneezing', 4: 'reading_newspaper'}
```

- kinetics_top5.zip: No internal data drop.
- kinetics_top5_tiny.zip: Internal data drop.
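Only the label mapping and the two archives are documented, so the following is a minimal sketch of indexing the archives against that mapping. The dict values are copied from the card; the assumption that videos sit under one top-level folder per class name inside the zips is mine and is not stated by the card:

```python
import zipfile
from pathlib import Path

# Label mapping copied verbatim from the card above.
ID2LABEL = {
    0: "opening_bottle",
    1: "squat",
    2: "reading_book",
    3: "sneezing",
    4: "reading_newspaper",
}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

def index_videos(zip_path: str) -> list[tuple[str, int]]:
    """Return (member_path, label_id) pairs for every video in the archive.

    Assumes (undocumented) a layout of one folder per class name,
    e.g. 'squat/abc123.mp4'.
    """
    pairs = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.endswith("/"):  # skip directory entries
                continue
            top = Path(name).parts[0]
            if top in LABEL2ID:
                pairs.append((name, LABEL2ID[top]))
    return pairs

# e.g. pairs = index_videos("kinetics_top5_tiny.zip")
```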
innat/KineticsTop5
[ "task_categories:video-classification", "size_categories:100M<n<1B", "license:apache-2.0", "region:us" ]
2023-10-11T05:29:40+00:00
{"license": "apache-2.0", "size_categories": ["100M<n<1B"], "task_categories": ["video-classification"]}
2023-10-11T12:43:12+00:00
[]
[]
TAGS #task_categories-video-classification #size_categories-100M<n<1B #license-apache-2.0 #region-us
### Dataset Description Original source: URL ## innat/KineticsTop5 A small set from Kinetics-400. It contains 5 classes. - kinetics_top5.zip: No internal data drop. - kinetics_top5_tiny.zip: Internal data drop.
[ "### Dataset Description\n\nOriginal source: URL", "## innat/KineticsTop5\n\nA small set from Kinetics-400. It contains 5 classes. \n\n\n\n- kinetics_top5.zip: No internal data drop.\n- kinetics_top5_tiny.zip: Internal data drop." ]
[ "TAGS\n#task_categories-video-classification #size_categories-100M<n<1B #license-apache-2.0 #region-us \n", "### Dataset Description\n\nOriginal source: URL", "## innat/KineticsTop5\n\nA small set from Kinetics-400. It contains 5 classes. \n\n\n\n- kinetics_top5.zip: No internal data drop.\n- kinetics_top5_tiny.zip: Internal data drop." ]
[ 37, 9, 55 ]
[ "passage: TAGS\n#task_categories-video-classification #size_categories-100M<n<1B #license-apache-2.0 #region-us \n### Dataset Description\n\nOriginal source: URL## innat/KineticsTop5\n\nA small set from Kinetics-400. It contains 5 classes. \n\n\n\n- kinetics_top5.zip: No internal data drop.\n- kinetics_top5_tiny.zip: Internal data drop." ]
661e4b114f42671860f020f42f5c8c078ca1a7f9
# Dataset Card for SuperDialseg

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://github.com/xyease/TADAM](https://github.com/xyease/TADAM)
- **Repository:** [https://github.com/xyease/TADAM](https://github.com/xyease/TADAM)
- **Paper:** Topic-aware multi-turn dialogue modeling
- **Leaderboard:**
- **Point of Contact:** [email protected]

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages: English

## Dataset Structure

### Data Instances

```
{
    "dial_data": {
        "dialseg711": [
            {
                "dial_id": "dialseg711_dial_000",
                "turns": [
                    {
                        "da": "",
                        "role": "user",
                        "turn_id": 0,
                        "utterance": "check the weather for the 7 day forecast",
                        "topic_id": 0,
                        "segmentation_label": 0
                    },
                    ...
                    {
                        "da": "",
                        "role": "agent",
                        "turn_id": 23,
                        "utterance": "Reminder set for your meeting at 11am on the 13th with management to discuss your company picnic. Is there anything else?",
                        "topic_id": 4,
                        "segmentation_label": 1
                    }
                ],
                ...
            }
        ]
    }
}
```

### Data Fields

#### Dialogue-Level

+ `dial_id`: ID of a dialogue;
+ `turns`: All utterances of a dialogue.

#### Utterance-Level

+ `da`: Dialogue Act annotation derived from the original DGDS dataset;
+ `role`: Role annotation derived from the original DGDS dataset;
+ `turn_id`: ID of an utterance;
+ `utterance`: Text of the utterance;
+ `topic_id`: ID (order) of the current topic;
+ `segmentation_label`: 1: it is the end of a topic; 0: others.

### Data Splits

Test only

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

MIT License

### Citation Information

@article{xu2020topic,
  title={Topic-aware multi-turn dialogue modeling},
  author={Xu, Yi and Zhao, Hai and Zhang, Zhuosheng},
  journal={arXiv preprint arXiv:2009.12539},
  year={2020}
}

### Contributions

+ Thanks to [@xyease](https://github.com/xyease) for constructing this dataset.
+ Thanks to [@Coldog2333](https://github.com/Coldog2333) for adding this dataset.
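Since the card pins down the semantics of `segmentation_label` (1 closes a topic, 0 otherwise), topic segments can be recovered with a few lines. A minimal self-contained sketch, using a hypothetical dialogue shaped like the data instance above:

```python
def split_into_segments(turns):
    """Group consecutive utterances into topic segments.

    Per the card, segmentation_label == 1 marks the END of a topic,
    so each segment runs up to and including a label-1 turn.
    """
    segments, current = [], []
    for turn in turns:
        current.append(turn["utterance"])
        if turn["segmentation_label"] == 1:
            segments.append(current)
            current = []
    if current:  # trailing turns with no closing label
        segments.append(current)
    return segments

# Hypothetical two-topic dialogue in the documented shape.
turns = [
    {"turn_id": 0, "utterance": "check the weather for the 7 day forecast", "segmentation_label": 0},
    {"turn_id": 1, "utterance": "It will be sunny all week.", "segmentation_label": 1},
    {"turn_id": 2, "utterance": "set a reminder for my meeting", "segmentation_label": 0},
    {"turn_id": 3, "utterance": "Reminder set. Anything else?", "segmentation_label": 1},
]
assert len(split_into_segments(turns)) == 2
```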
Coldog2333/dialseg711
[ "size_categories:n<1K", "language:en", "license:mit", "dialogue segmentation", "region:us" ]
2023-10-11T05:36:02+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "tags": ["dialogue segmentation"]}
2023-10-11T05:41:09+00:00
[]
[ "en" ]
TAGS #size_categories-n<1K #language-English #license-mit #dialogue segmentation #region-us
# Dataset Card for SuperDialseg ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: Topic-aware multi-turn dialogue modeling - Leaderboard: - Point of Contact: [email protected] ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages: English ## Dataset Structure ### Data Instances ### Data Fields #### Dialogue-Level + 'dial_id': ID of a dialogue; + 'turns': All utterances of a dialogue. #### Utterance-Level + 'da': Dialogue Act annotation derived from the original DGDS dataset; + 'role': Role annotation derived from the original DGDS dataset; + 'turn_id': ID of an utterance; + 'utterance': Text of the utterance; + 'topic_id': ID (order) of the current topic; + 'segmentation_label': 1: it is the end of a topic; 0: others. ### Data Splits Test only ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information MIT License @article{xu2020topic, title={Topic-aware multi-turn dialogue modeling}, author={Xu, Yi and Zhao, Hai and Zhang, Zhuosheng}, journal={arXiv preprint arXiv:2009.12539}, year={2020} } ### Contributions + Thanks to @xyease for constructing this dataset. + Thanks to @Coldog2333 for adding this dataset.
[ "# Dataset Card for SuperDialseg", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Topic-aware multi-turn dialogue modeling\n- Leaderboard: \n- Point of Contact: [email protected]", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages: English", "## Dataset Structure", "### Data Instances", "### Data Fields", "#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.", "#### Utterance-Level\n+ 'da': Dialogue Act annotation derived from the original DGDS dataset;\n+ 'role': Role annotation derived from the original DGDS dataset;\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.", "### Data Splits\n\nTest only", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nMIT License\n\n\n\n@article{xu2020topic,\n title={Topic-aware multi-turn dialogue modeling},\n author={Xu, Yi and Zhao, Hai and Zhang, Zhuosheng},\n journal={arXiv preprint arXiv:2009.12539},\n year={2020}\n}", "### Contributions\n\n+ Thanks to @xyease for constructing this dataset.\n+ Thanks to @Coldog2333 for adding this dataset." ]
[ "TAGS\n#size_categories-n<1K #language-English #license-mit #dialogue segmentation #region-us \n", "# Dataset Card for SuperDialseg", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Topic-aware multi-turn dialogue modeling\n- Leaderboard: \n- Point of Contact: [email protected]", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages: English", "## Dataset Structure", "### Data Instances", "### Data Fields", "#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.", "#### Utterance-Level\n+ 'da': Dialogue Act annotation derived from the original DGDS dataset;\n+ 'role': Role annotation derived from the original DGDS dataset;\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.", "### Data Splits\n\nTest only", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nMIT License\n\n\n\n@article{xu2020topic,\n title={Topic-aware multi-turn dialogue modeling},\n author={Xu, Yi and Zhao, Hai and Zhang, Zhuosheng},\n journal={arXiv preprint arXiv:2009.12539},\n year={2020}\n}", "### Contributions\n\n+ Thanks to @xyease for constructing this dataset.\n+ Thanks to @Coldog2333 for adding this dataset." ]
[ 31, 9, 125, 46, 6, 10, 6, 6, 6, 5, 34, 117, 7, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 79, 33 ]
[ "passage: TAGS\n#size_categories-n<1K #language-English #license-mit #dialogue segmentation #region-us \n# Dataset Card for SuperDialseg## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Topic-aware multi-turn dialogue modeling\n- Leaderboard: \n- Point of Contact: [email protected]### Dataset Summary### Supported Tasks and Leaderboards### Languages: English## Dataset Structure### Data Instances### Data Fields#### Dialogue-Level\n+ 'dial_id': ID of a dialogue;\n+ 'turns': All utterances of a dialogue.#### Utterance-Level\n+ 'da': Dialogue Act annotation derived from the original DGDS dataset;\n+ 'role': Role annotation derived from the original DGDS dataset;\n+ 'turn_id': ID of an utterance;\n+ 'utterance': Text of the utterance;\n+ 'topic_id': ID (order) of the current topic;\n+ 'segmentation_label': 1: it is the end of a topic; 0: others.### Data Splits\n\nTest only## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information" ]
71cbd95d5b0d3593f5a2b240638c5eeebcbcb496
# Dataset Card for "vary_merge_dataset_filter_number1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tanvirsrbd1/vary_merge_dataset_filter_number1
[ "region:us" ]
2023-10-11T05:45:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3301248, "num_examples": 5960}], "download_size": 1069863, "dataset_size": 3301248}}
2023-10-11T05:45:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vary_merge_dataset_filter_number1" More Information needed
[ "# Dataset Card for \"vary_merge_dataset_filter_number1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vary_merge_dataset_filter_number1\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vary_merge_dataset_filter_number1\"\n\nMore Information needed" ]
d995098f2e8c4a5f0db9603478816a0a7b384734
# Dataset Card for "plmn1.5l" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
csupiisc/plmn1.5l
[ "region:us" ]
2023-10-11T05:48:15+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 754298, "num_examples": 10000}], "download_size": 299510, "dataset_size": 754298}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T16:56:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "plmn1.5l" More Information needed
[ "# Dataset Card for \"plmn1.5l\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"plmn1.5l\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"plmn1.5l\"\n\nMore Information needed" ]
51bb6e6c9d14510149d6a1a1cc0c47e1f0b3ab8d
# Dataset Card for "truyenfull_processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tinhpx2911/truyenfull_processed
[ "region:us" ]
2023-10-11T05:52:59+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9414309891, "num_examples": 665475}], "download_size": 2167644522, "dataset_size": 9414309891}}
2023-10-11T06:33:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "truyenfull_processed" More Information needed
[ "# Dataset Card for \"truyenfull_processed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"truyenfull_processed\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"truyenfull_processed\"\n\nMore Information needed" ]
73c03a00a09268e045aee67d60bb717f27302223
task_categories:
- Paragraph Selection
- Span selection
vanessa0688/ADL2023HW1
[ "size_categories:100K<n<1M", "language:zh", "license:apache-2.0", "region:us" ]
2023-10-11T06:03:44+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["100K<n<1M"]}
2023-10-11T07:06:41+00:00
[]
[ "zh" ]
TAGS #size_categories-100K<n<1M #language-Chinese #license-apache-2.0 #region-us
task_categories: - Paragraph Selection - Span selection
[]
[ "TAGS\n#size_categories-100K<n<1M #language-Chinese #license-apache-2.0 #region-us \n" ]
[ 31 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-Chinese #license-apache-2.0 #region-us \n" ]
21ea01f21564b839b28d77a5697ac8209de411b7
# Dataset Card for Evaluation run of ehartford/dolphin-2.1-mistral-7b

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/ehartford/dolphin-2.1-mistral-7b
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-28T06:17:12.096857](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b/blob/main/results_2023-10-28T06-17-12.096857.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.0025167785234899327,
        "em_stderr": 0.0005131152834514602,
        "f1": 0.07557885906040251,
        "f1_stderr": 0.0015806922251337756,
        "acc": 0.49258006202828786,
        "acc_stderr": 0.011432753263209281
    },
    "harness|drop|3": {
        "em": 0.0025167785234899327,
        "em_stderr": 0.0005131152834514602,
        "f1": 0.07557885906040251,
        "f1_stderr": 0.0015806922251337756
    },
    "harness|gsm8k|5": {
        "acc": 0.20773313115996966,
        "acc_stderr": 0.011174572716705898
    },
    "harness|winogrande|5": {
        "acc": 0.7774269928966061,
        "acc_stderr": 0.011690933809712662
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b
[ "region:us" ]
2023-10-11T06:08:34+00:00
{"pretty_name": "Evaluation run of ehartford/dolphin-2.1-mistral-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T06:17:12.096857](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b/blob/main/results_2023-10-28T06-17-12.096857.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0025167785234899327,\n \"em_stderr\": 0.0005131152834514602,\n \"f1\": 0.07557885906040251,\n \"f1_stderr\": 0.0015806922251337756,\n \"acc\": 0.49258006202828786,\n \"acc_stderr\": 0.011432753263209281\n },\n \"harness|drop|3\": {\n \"em\": 0.0025167785234899327,\n \"em_stderr\": 0.0005131152834514602,\n \"f1\": 0.07557885906040251,\n \"f1_stderr\": 0.0015806922251337756\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.20773313115996966,\n \"acc_stderr\": 0.011174572716705898\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7774269928966061,\n \"acc_stderr\": 0.011690933809712662\n }\n}\n```", "repo_url": "https://huggingface.co/ehartford/dolphin-2.1-mistral-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|arc:challenge|25_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|arc:challenge|25_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T09_35_25.636267", "path": ["**/details_harness|drop|3_2023-10-26T09-35-25.636267.parquet"]}, {"split": "2023_10_28T06_17_12.096857", "path": ["**/details_harness|drop|3_2023-10-28T06-17-12.096857.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T06-17-12.096857.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T09_35_25.636267", "path": ["**/details_harness|gsm8k|5_2023-10-26T09-35-25.636267.parquet"]}, {"split": "2023_10_28T06_17_12.096857", "path": ["**/details_harness|gsm8k|5_2023-10-28T06-17-12.096857.parquet"]}, {"split": "latest", "path": 
["**/details_harness|gsm8k|5_2023-10-28T06-17-12.096857.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hellaswag|10_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hellaswag|10_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-08-11.393844.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T07-08-11.393844.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-16-54.692993.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-16-54.692993.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-16-54.692993.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T07-16-54.692993.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", 
"data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": 
"2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": 
["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": 
["**/details_harness|hendrycksTest-management|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": 
"2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": 
"2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T07-16-54.692993.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T07-16-54.692993.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T09_35_25.636267", "path": ["**/details_harness|winogrande|5_2023-10-26T09-35-25.636267.parquet"]}, {"split": "2023_10_28T06_17_12.096857", "path": ["**/details_harness|winogrande|5_2023-10-28T06-17-12.096857.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T06-17-12.096857.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T07_08_11.393844", "path": ["results_2023-10-11T07-08-11.393844.parquet"]}, {"split": "2023_10_11T07_16_54.692993", "path": ["results_2023-10-11T07-16-54.692993.parquet"]}, {"split": "2023_10_26T09_35_25.636267", "path": ["results_2023-10-26T09-35-25.636267.parquet"]}, {"split": "2023_10_28T06_17_12.096857", "path": ["results_2023-10-28T06-17-12.096857.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T06-17-12.096857.parquet"]}]}]}
2023-10-28T05:17:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of ehartford/dolphin-2.1-mistral-7b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model ehartford/dolphin-2.1-mistral-7b on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-28T06:17:12.096857 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of ehartford/dolphin-2.1-mistral-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ehartford/dolphin-2.1-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T06:17:12.096857(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of ehartford/dolphin-2.1-mistral-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ehartford/dolphin-2.1-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T06:17:12.096857(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 23, 31, 171, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ehartford/dolphin-2.1-mistral-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ehartford/dolphin-2.1-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T06:17:12.096857(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
7aec023404cecbe662824a11cff9f18c3de18b7e
# Dataset Card for "jy_finetune_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dhkim123/jy_finetune_test
[ "region:us" ]
2023-10-11T06:08:51+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28604.0, "num_examples": 1}], "download_size": 29365, "dataset_size": 28604.0}}
2023-10-11T06:18:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "jy_finetune_test" More Information needed
[ "# Dataset Card for \"jy_finetune_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"jy_finetune_test\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"jy_finetune_test\"\n\nMore Information needed" ]
d9c382689ad79e212b44743ce7281a2881a2d246
# Dataset Card for "common_voice_13_0-ja-whisper-tiny" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CWKSC/common_voice_13_0-ja-whisper-tiny
[ "region:us" ]
2023-10-11T06:17:08+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 11557295928, "num_examples": 12032}, {"name": "test", "num_bytes": 4765120552, "num_examples": 4961}], "download_size": 0, "dataset_size": 16322416480}}
2023-10-11T07:17:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "common_voice_13_0-ja-whisper-tiny" More Information needed
[ "# Dataset Card for \"common_voice_13_0-ja-whisper-tiny\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"common_voice_13_0-ja-whisper-tiny\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"common_voice_13_0-ja-whisper-tiny\"\n\nMore Information needed" ]
5eec7c6b9c4d16c4b259f0fa1f8999c301770be4
# Dataset Card for Evaluation run of AA051610/VA ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/AA051610/VA - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [AA051610/VA](https://huggingface.co/AA051610/VA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 61 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_AA051610__VA", "harness_truthfulqa_mc_0", split="train") ``` ## Latest results These are the [latest results from run 2023-10-11T07:22:26.417131](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__VA/blob/main/results_2023-10-11T07-22-26.417131.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.4972405415581996, "acc_stderr": 0.03512578000813228, "acc_norm": 0.5002960487991649, "acc_norm_stderr": 0.03512615731416433, "mc1": 0.28518971848225216, "mc1_stderr": 0.015805827874454892, "mc2": 0.44928868954080875, "mc2_stderr": 0.014916546411376396 }, "harness|arc:challenge|25": { "acc": 0.3848122866894198, "acc_stderr": 0.014218371065251105, "acc_norm": 0.4138225255972696, "acc_norm_stderr": 0.014392730009221007 }, "harness|hellaswag|10": { "acc": 0.47390957976498704, "acc_stderr": 0.004982983592459198, "acc_norm": 0.6251742680740888, "acc_norm_stderr": 0.004830885704380092 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4444444444444444, "acc_stderr": 0.04292596718256981, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04292596718256981 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.45394736842105265, "acc_stderr": 0.04051646342874142, "acc_norm": 0.45394736842105265, "acc_norm_stderr": 0.04051646342874142 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5547169811320755, "acc_stderr": 0.030588052974270658, "acc_norm": 0.5547169811320755, "acc_norm_stderr": 0.030588052974270658 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5069444444444444, "acc_stderr": 0.04180806750294938, "acc_norm": 0.5069444444444444, "acc_norm_stderr": 0.04180806750294938 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.24, "acc_stderr": 0.04292346959909284, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909284 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.48, 
"acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5144508670520231, "acc_stderr": 0.03810871630454764, "acc_norm": 0.5144508670520231, "acc_norm_stderr": 0.03810871630454764 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.18627450980392157, "acc_stderr": 0.03873958714149351, "acc_norm": 0.18627450980392157, "acc_norm_stderr": 0.03873958714149351 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.63, "acc_stderr": 0.048523658709390974, "acc_norm": 0.63, "acc_norm_stderr": 0.048523658709390974 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4553191489361702, "acc_stderr": 0.032555253593403555, "acc_norm": 0.4553191489361702, "acc_norm_stderr": 0.032555253593403555 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.34210526315789475, "acc_stderr": 0.04462917535336936, "acc_norm": 0.34210526315789475, "acc_norm_stderr": 0.04462917535336936 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5103448275862069, "acc_stderr": 0.04165774775728762, "acc_norm": 0.5103448275862069, "acc_norm_stderr": 0.04165774775728762 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.328042328042328, "acc_stderr": 0.024180497164376896, "acc_norm": 0.328042328042328, "acc_norm_stderr": 0.024180497164376896 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.2698412698412698, "acc_stderr": 0.03970158273235172, "acc_norm": 0.2698412698412698, "acc_norm_stderr": 0.03970158273235172 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.567741935483871, "acc_stderr": 0.028181739720019416, "acc_norm": 0.567741935483871, "acc_norm_stderr": 0.028181739720019416 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.35960591133004927, "acc_stderr": 0.033764582465095665, "acc_norm": 0.35960591133004927, "acc_norm_stderr": 0.033764582465095665 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6121212121212121, "acc_stderr": 0.038049136539710114, "acc_norm": 0.6121212121212121, "acc_norm_stderr": 0.038049136539710114 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.5858585858585859, "acc_stderr": 0.03509438348879629, "acc_norm": 0.5858585858585859, "acc_norm_stderr": 0.03509438348879629 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.6269430051813472, "acc_stderr": 0.03490205592048573, "acc_norm": 0.6269430051813472, "acc_norm_stderr": 0.03490205592048573 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.44871794871794873, "acc_stderr": 0.025217315184846475, "acc_norm": 0.44871794871794873, "acc_norm_stderr": 0.025217315184846475 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.26296296296296295, "acc_stderr": 0.02684205787383371, "acc_norm": 0.26296296296296295, "acc_norm_stderr": 0.02684205787383371 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5252100840336135, "acc_stderr": 0.03243718055137411, "acc_norm": 0.5252100840336135, "acc_norm_stderr": 0.03243718055137411 }, 
"harness|hendrycksTest-high_school_physics|5": { "acc": 0.3443708609271523, "acc_stderr": 0.03879687024073327, "acc_norm": 0.3443708609271523, "acc_norm_stderr": 0.03879687024073327 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.6128440366972477, "acc_stderr": 0.02088423199264345, "acc_norm": 0.6128440366972477, "acc_norm_stderr": 0.02088423199264345 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.39814814814814814, "acc_stderr": 0.033384734032074016, "acc_norm": 0.39814814814814814, "acc_norm_stderr": 0.033384734032074016 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.6372549019607843, "acc_stderr": 0.03374499356319355, "acc_norm": 0.6372549019607843, "acc_norm_stderr": 0.03374499356319355 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.70042194092827, "acc_stderr": 0.029818024749753095, "acc_norm": 0.70042194092827, "acc_norm_stderr": 0.029818024749753095 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6188340807174888, "acc_stderr": 0.03259625118416827, "acc_norm": 0.6188340807174888, "acc_norm_stderr": 0.03259625118416827 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5648854961832062, "acc_stderr": 0.04348208051644858, "acc_norm": 0.5648854961832062, "acc_norm_stderr": 0.04348208051644858 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6694214876033058, "acc_stderr": 0.04294340845212093, "acc_norm": 0.6694214876033058, "acc_norm_stderr": 0.04294340845212093 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6574074074074074, "acc_stderr": 0.045879047413018105, "acc_norm": 0.6574074074074074, "acc_norm_stderr": 0.045879047413018105 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.5153374233128835, "acc_stderr": 0.03926522378708843, "acc_norm": 0.5153374233128835, "acc_norm_stderr": 0.03926522378708843 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3482142857142857, "acc_stderr": 0.04521829902833586, "acc_norm": 0.3482142857142857, "acc_norm_stderr": 0.04521829902833586 }, "harness|hendrycksTest-management|5": { "acc": 0.6504854368932039, "acc_stderr": 0.047211885060971716, "acc_norm": 0.6504854368932039, "acc_norm_stderr": 0.047211885060971716 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7222222222222222, "acc_stderr": 0.029343114798094462, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.029343114798094462 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.56, "acc_stderr": 0.0498887651569859, "acc_norm": 0.56, "acc_norm_stderr": 0.0498887651569859 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.6781609195402298, "acc_stderr": 0.0167063814150579, "acc_norm": 0.6781609195402298, "acc_norm_stderr": 0.0167063814150579 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5751445086705202, "acc_stderr": 0.02661335084026174, "acc_norm": 0.5751445086705202, "acc_norm_stderr": 0.02661335084026174 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.25251396648044694, "acc_stderr": 0.014530330201468628, "acc_norm": 0.25251396648044694, "acc_norm_stderr": 0.014530330201468628 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.4934640522875817, "acc_stderr": 0.028627470550556054, "acc_norm": 0.4934640522875817, "acc_norm_stderr": 0.028627470550556054 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5594855305466238, "acc_stderr": 0.028196400574197426, "acc_norm": 0.5594855305466238, "acc_norm_stderr": 0.028196400574197426 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5370370370370371, "acc_stderr": 0.02774431344337654, "acc_norm": 
0.5370370370370371, "acc_norm_stderr": 0.02774431344337654 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.43617021276595747, "acc_stderr": 0.029583452036284073, "acc_norm": 0.43617021276595747, "acc_norm_stderr": 0.029583452036284073 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.44198174706649285, "acc_stderr": 0.012683972513598806, "acc_norm": 0.44198174706649285, "acc_norm_stderr": 0.012683972513598806 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4852941176470588, "acc_stderr": 0.03035969707904612, "acc_norm": 0.4852941176470588, "acc_norm_stderr": 0.03035969707904612 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5343137254901961, "acc_stderr": 0.02018014484330729, "acc_norm": 0.5343137254901961, "acc_norm_stderr": 0.02018014484330729 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6090909090909091, "acc_stderr": 0.04673752333670237, "acc_norm": 0.6090909090909091, "acc_norm_stderr": 0.04673752333670237 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5673469387755102, "acc_stderr": 0.031717528240626645, "acc_norm": 0.5673469387755102, "acc_norm_stderr": 0.031717528240626645 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6467661691542289, "acc_stderr": 0.03379790611796777, "acc_norm": 0.6467661691542289, "acc_norm_stderr": 0.03379790611796777 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.72, "acc_stderr": 0.04512608598542127, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.038899512528272166, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.038899512528272166 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.6081871345029239, "acc_stderr": 0.03743979825926401, "acc_norm": 0.6081871345029239, "acc_norm_stderr": 0.03743979825926401 }, "harness|truthfulqa:mc|0": { "mc1": 0.28518971848225216, "mc1_stderr": 0.015805827874454892, "mc2": 0.44928868954080875, "mc2_stderr": 0.014916546411376396 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
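As a usage note for the loading snippet above: the aggregated metrics live in the "results" configuration (see the config list in the metadata below), and the "latest" split always resolves to the most recent run. Below is a minimal, hedged sketch of pulling them into pandas; the `to_pandas` conversion is just one convenient way to inspect the payload, not part of the evaluation harness:

```python
from datasets import load_dataset

# The "results" configuration aggregates all metrics of a run; the "latest"
# split points at the most recent evaluation (here 2023-10-11T07:22:26.417131).
results = load_dataset("open-llm-leaderboard/details_AA051610__VA",
                       "results",
                       split="latest")

df = results.to_pandas()  # one row per run payload
print(df.columns)
```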
open-llm-leaderboard/details_AA051610__VA
[ "region:us" ]
2023-10-11T06:22:48+00:00
{"pretty_name": "Evaluation run of AA051610/VA", "dataset_summary": "Dataset automatically created during the evaluation run of model [AA051610/VA](https://huggingface.co/AA051610/VA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051610__VA\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T07:22:26.417131](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__VA/blob/main/results_2023-10-11T07-22-26.417131.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4972405415581996,\n \"acc_stderr\": 0.03512578000813228,\n \"acc_norm\": 0.5002960487991649,\n \"acc_norm_stderr\": 0.03512615731416433,\n \"mc1\": 0.28518971848225216,\n \"mc1_stderr\": 0.015805827874454892,\n \"mc2\": 0.44928868954080875,\n \"mc2_stderr\": 0.014916546411376396\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3848122866894198,\n \"acc_stderr\": 0.014218371065251105,\n \"acc_norm\": 0.4138225255972696,\n \"acc_norm_stderr\": 0.014392730009221007\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.47390957976498704,\n \"acc_stderr\": 0.004982983592459198,\n \"acc_norm\": 0.6251742680740888,\n \"acc_norm_stderr\": 0.004830885704380092\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.45394736842105265,\n \"acc_stderr\": 0.04051646342874142,\n \"acc_norm\": 0.45394736842105265,\n \"acc_norm_stderr\": 0.04051646342874142\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5547169811320755,\n \"acc_stderr\": 0.030588052974270658,\n \"acc_norm\": 0.5547169811320755,\n \"acc_norm_stderr\": 0.030588052974270658\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5069444444444444,\n \"acc_stderr\": 0.04180806750294938,\n \"acc_norm\": 0.5069444444444444,\n \"acc_norm_stderr\": 0.04180806750294938\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909284\n },\n 
\"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5144508670520231,\n \"acc_stderr\": 0.03810871630454764,\n \"acc_norm\": 0.5144508670520231,\n \"acc_norm_stderr\": 0.03810871630454764\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.18627450980392157,\n \"acc_stderr\": 0.03873958714149351,\n \"acc_norm\": 0.18627450980392157,\n \"acc_norm_stderr\": 0.03873958714149351\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.048523658709390974,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.048523658709390974\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4553191489361702,\n \"acc_stderr\": 0.032555253593403555,\n \"acc_norm\": 0.4553191489361702,\n \"acc_norm_stderr\": 0.032555253593403555\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.34210526315789475,\n \"acc_stderr\": 0.04462917535336936,\n \"acc_norm\": 0.34210526315789475,\n \"acc_norm_stderr\": 0.04462917535336936\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5103448275862069,\n \"acc_stderr\": 0.04165774775728762,\n \"acc_norm\": 0.5103448275862069,\n \"acc_norm_stderr\": 0.04165774775728762\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.328042328042328,\n \"acc_stderr\": 0.024180497164376896,\n \"acc_norm\": 0.328042328042328,\n \"acc_norm_stderr\": 0.024180497164376896\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2698412698412698,\n \"acc_stderr\": 0.03970158273235172,\n \"acc_norm\": 0.2698412698412698,\n \"acc_norm_stderr\": 0.03970158273235172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.567741935483871,\n \"acc_stderr\": 0.028181739720019416,\n \"acc_norm\": 0.567741935483871,\n \"acc_norm_stderr\": 0.028181739720019416\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.35960591133004927,\n \"acc_stderr\": 0.033764582465095665,\n \"acc_norm\": 0.35960591133004927,\n \"acc_norm_stderr\": 0.033764582465095665\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6121212121212121,\n \"acc_stderr\": 0.038049136539710114,\n \"acc_norm\": 0.6121212121212121,\n \"acc_norm_stderr\": 0.038049136539710114\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5858585858585859,\n \"acc_stderr\": 0.03509438348879629,\n \"acc_norm\": 0.5858585858585859,\n \"acc_norm_stderr\": 0.03509438348879629\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.6269430051813472,\n \"acc_stderr\": 0.03490205592048573,\n \"acc_norm\": 0.6269430051813472,\n \"acc_norm_stderr\": 0.03490205592048573\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.44871794871794873,\n \"acc_stderr\": 0.025217315184846475,\n 
\"acc_norm\": 0.44871794871794873,\n \"acc_norm_stderr\": 0.025217315184846475\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5252100840336135,\n \"acc_stderr\": 0.03243718055137411,\n \"acc_norm\": 0.5252100840336135,\n \"acc_norm_stderr\": 0.03243718055137411\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.03879687024073327,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.03879687024073327\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6128440366972477,\n \"acc_stderr\": 0.02088423199264345,\n \"acc_norm\": 0.6128440366972477,\n \"acc_norm_stderr\": 0.02088423199264345\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.39814814814814814,\n \"acc_stderr\": 0.033384734032074016,\n \"acc_norm\": 0.39814814814814814,\n \"acc_norm_stderr\": 0.033384734032074016\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6372549019607843,\n \"acc_stderr\": 0.03374499356319355,\n \"acc_norm\": 0.6372549019607843,\n \"acc_norm_stderr\": 0.03374499356319355\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.70042194092827,\n \"acc_stderr\": 0.029818024749753095,\n \"acc_norm\": 0.70042194092827,\n \"acc_norm_stderr\": 0.029818024749753095\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6188340807174888,\n \"acc_stderr\": 0.03259625118416827,\n \"acc_norm\": 0.6188340807174888,\n \"acc_norm_stderr\": 0.03259625118416827\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.04348208051644858,\n \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.04348208051644858\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6694214876033058,\n \"acc_stderr\": 0.04294340845212093,\n \"acc_norm\": 0.6694214876033058,\n \"acc_norm_stderr\": 0.04294340845212093\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6574074074074074,\n \"acc_stderr\": 0.045879047413018105,\n \"acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.045879047413018105\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5153374233128835,\n \"acc_stderr\": 0.03926522378708843,\n \"acc_norm\": 0.5153374233128835,\n \"acc_norm_stderr\": 0.03926522378708843\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n \"acc_stderr\": 0.04521829902833586,\n \"acc_norm\": 0.3482142857142857,\n \"acc_norm_stderr\": 0.04521829902833586\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6504854368932039,\n \"acc_stderr\": 0.047211885060971716,\n \"acc_norm\": 0.6504854368932039,\n \"acc_norm_stderr\": 0.047211885060971716\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.029343114798094462,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.029343114798094462\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.0498887651569859,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.0498887651569859\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6781609195402298,\n \"acc_stderr\": 0.0167063814150579,\n \"acc_norm\": 0.6781609195402298,\n \"acc_norm_stderr\": 0.0167063814150579\n 
},\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5751445086705202,\n \"acc_stderr\": 0.02661335084026174,\n \"acc_norm\": 0.5751445086705202,\n \"acc_norm_stderr\": 0.02661335084026174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.25251396648044694,\n \"acc_stderr\": 0.014530330201468628,\n \"acc_norm\": 0.25251396648044694,\n \"acc_norm_stderr\": 0.014530330201468628\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.4934640522875817,\n \"acc_stderr\": 0.028627470550556054,\n \"acc_norm\": 0.4934640522875817,\n \"acc_norm_stderr\": 0.028627470550556054\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5594855305466238,\n \"acc_stderr\": 0.028196400574197426,\n \"acc_norm\": 0.5594855305466238,\n \"acc_norm_stderr\": 0.028196400574197426\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.02774431344337654,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.02774431344337654\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.43617021276595747,\n \"acc_stderr\": 0.029583452036284073,\n \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.029583452036284073\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44198174706649285,\n \"acc_stderr\": 0.012683972513598806,\n \"acc_norm\": 0.44198174706649285,\n \"acc_norm_stderr\": 0.012683972513598806\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4852941176470588,\n \"acc_stderr\": 0.03035969707904612,\n \"acc_norm\": 0.4852941176470588,\n \"acc_norm_stderr\": 0.03035969707904612\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5343137254901961,\n \"acc_stderr\": 0.02018014484330729,\n \"acc_norm\": 0.5343137254901961,\n \"acc_norm_stderr\": 0.02018014484330729\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n \"acc_stderr\": 0.04673752333670237,\n \"acc_norm\": 0.6090909090909091,\n \"acc_norm_stderr\": 0.04673752333670237\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5673469387755102,\n \"acc_stderr\": 0.031717528240626645,\n \"acc_norm\": 0.5673469387755102,\n \"acc_norm_stderr\": 0.031717528240626645\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6467661691542289,\n \"acc_stderr\": 0.03379790611796777,\n \"acc_norm\": 0.6467661691542289,\n \"acc_norm_stderr\": 0.03379790611796777\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.6081871345029239,\n \"acc_stderr\": 0.03743979825926401,\n \"acc_norm\": 0.6081871345029239,\n \"acc_norm_stderr\": 0.03743979825926401\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28518971848225216,\n \"mc1_stderr\": 0.015805827874454892,\n \"mc2\": 0.44928868954080875,\n \"mc2_stderr\": 0.014916546411376396\n }\n}\n```", "repo_url": "https://huggingface.co/AA051610/VA", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": 
["**/details_harness|arc:challenge|25_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hellaswag|10_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-22-26.417131.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-22-26.417131.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-22-26.417131.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T07-22-26.417131.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T07-22-26.417131.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T07_22_26.417131", "path": ["results_2023-10-11T07-22-26.417131.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T07-22-26.417131.parquet"]}]}]}
2023-10-11T06:23:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of AA051610/VA ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model AA051610/VA on the Open LLM Leaderboard. The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following (see the sketch below): ## Latest results These are the latest results from run 2023-10-11T07:22:26.417131 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
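The load example referenced under "Dataset Summary" above was dropped when this card text was flattened. A minimal sketch, assuming the repository follows the leaderboard's usual `details_<org>__<model>` naming (so `open-llm-leaderboard/details_AA051610__VA` is an inferred, unverified name) and using the `harness_truthfulqa_mc_0` configuration that is listed in this card's metadata:

```python
from datasets import load_dataset

# Repository name inferred from the details_<org>__<model> convention; unverified.
data = load_dataset(
    "open-llm-leaderboard/details_AA051610__VA",
    "harness_truthfulqa_mc_0",  # any configuration from the card metadata works here
    split="latest",             # "latest" always points at the newest run
)
```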
[ "# Dataset Card for Evaluation run of AA051610/VA", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/VA on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T07:22:26.417131(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of AA051610/VA", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/VA on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T07:22:26.417131(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 15, 31, 163, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of AA051610/VA## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/VA on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T07:22:26.417131(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
419b4590294425fc4ed4bb3d513af401dbf927df
# Dataset Card for "tab-wnut" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
madaanpulkit/tab-wnut
[ "region:us" ]
2023-10-11T06:38:29+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tagged_text", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "0", "1": "B-DIRECT-CODE", "2": "I-DIRECT-CODE", "3": "B-DIRECT-PERSON", "4": "I-DIRECT-PERSON", "5": "B-QUASI-DATETIME", "6": "I-QUASI-DATETIME", "7": "B-QUASI-PERSON", "8": "I-QUASI-PERSON", "9": "B-QUASI-LOC", "10": "I-QUASI-LOC", "11": "B-QUASI-QUANTITY", "12": "I-QUASI-QUANTITY", "13": "B-QUASI-CODE", "14": "I-QUASI-CODE", "15": "B-QUASI-ORG", "16": "I-QUASI-ORG", "17": "B-QUASI-DEM", "18": "I-QUASI-DEM", "19": "B-QUASI-MISC", "20": "I-QUASI-MISC", "21": "B-DIRECT-ORG", "22": "I-DIRECT-ORG", "23": "B-DIRECT-DATETIME", "24": "I-DIRECT-DATETIME", "25": "B-DIRECT-LOC", "26": "I-DIRECT-LOC", "27": "B-DIRECT-MISC", "28": "I-DIRECT-MISC", "29": "B-DIRECT-DEM", "30": "I-DIRECT-DEM"}}}}], "splits": [{"name": "train", "num_bytes": 45872319, "num_examples": 1014}, {"name": "dev", "num_bytes": 3749307, "num_examples": 127}, {"name": "test", "num_bytes": 3619745, "num_examples": 127}], "download_size": 11056816, "dataset_size": 53241371}}
2023-11-02T06:07:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "tab-wnut" More Information needed
[ "# Dataset Card for \"tab-wnut\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"tab-wnut\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"tab-wnut\"\n\nMore Information needed" ]
717232fae2270c56f9f45174bcc7445f55d875e5
# Dataset Card for "hf-stack-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
smangrul/hf-stack-v2
[ "region:us" ]
2023-10-11T06:43:48+00:00
{"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 103347552, "num_examples": 6567}], "download_size": 35040642, "dataset_size": 103347552}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T06:43:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hf-stack-v2" More Information needed
[ "# Dataset Card for \"hf-stack-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hf-stack-v2\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hf-stack-v2\"\n\nMore Information needed" ]
4bc7f0476c9ce5389a079bfc6827922f276c3cc6
# Dataset Card for "my-NFT-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hongerzh/my-NFT-dataset
[ "region:us" ]
2023-10-11T06:45:16+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "test", "1": "train", "2": "validation"}}}}], "splits": [{"name": "train", "num_bytes": 948013.0, "num_examples": 7}, {"name": "validation", "num_bytes": 169094.0, "num_examples": 2}, {"name": "test", "num_bytes": 169094.0, "num_examples": 2}], "download_size": 1290909, "dataset_size": 1286201.0}}
2023-10-11T07:00:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "my-NFT-dataset" More Information needed
[ "# Dataset Card for \"my-NFT-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"my-NFT-dataset\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"my-NFT-dataset\"\n\nMore Information needed" ]
021a35ec317ad3cf25372d50d312107613114bab
# Dataset Card for "timelist_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
marcus2000/timelist_dataset
[ "region:us" ]
2023-10-11T06:53:17+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "summary", "path": "data/summary-*"}, {"split": "task", "path": "data/task-*"}]}], "dataset_info": {"features": [{"name": "original", "dtype": "string"}, {"name": "protocol", "dtype": "string"}, {"name": "edited_protocol", "dtype": "string"}], "splits": [{"name": "summary", "num_bytes": 1141876, "num_examples": 111}, {"name": "task", "num_bytes": 396043, "num_examples": 111}], "download_size": 728443, "dataset_size": 1537919}}
2023-10-11T06:53:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "timelist_dataset" More Information needed
[ "# Dataset Card for \"timelist_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"timelist_dataset\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"timelist_dataset\"\n\nMore Information needed" ]
a1ab1186a0fab0b8401be2e213d240e28f8c69b1
# Dataset Card for "test001" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Ahmed007/test001
[ "region:us" ]
2023-10-11T06:59:18+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16610581.0, "num_examples": 108}], "download_size": 15605780, "dataset_size": 16610581.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T06:59:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test001" More Information needed
[ "# Dataset Card for \"test001\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test001\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test001\"\n\nMore Information needed" ]
d9302f7ba2d4aa2a68c462ea1d96a15b917540c4
32 made-up insect descriptions with Latin names and orders (well, there's a spider, too), as one would find in a field guide. These were created with ChatGPT 3.5 / ChatGPT 4 for the purpose of running example applications such as an "entomology field guide helper". Entirely fictional material was chosen to avoid inadvertently drawing on the LLM's implicit knowledge from pretraining in the demos.
datastax/entomology
[ "size_categories:n<1K", "language:en", "license:apache-2.0", "region:us" ]
2023-10-11T07:03:34+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "pretty_name": "Fictional entomology"}
2023-10-11T07:55:50+00:00
[]
[ "en" ]
TAGS #size_categories-n<1K #language-English #license-apache-2.0 #region-us
32 made-up insect descriptions with Latin names and orders (well, there's a spider, too), as one would find in a field guide. These were created with ChatGPT 3.5 / ChatGPT 4 for the purpose of running example applications such as an "entomology field guide helper". Entirely fictional material was chosen to avoid inadvertently drawing on the LLM's implicit knowledge from pretraining in the demos.
[]
[ "TAGS\n#size_categories-n<1K #language-English #license-apache-2.0 #region-us \n" ]
[ 28 ]
[ "passage: TAGS\n#size_categories-n<1K #language-English #license-apache-2.0 #region-us \n" ]
0e441df9ec6c48a6ebd13cf6f424c6b5de090a50
# Dataset Card for "jo_aud" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nadsoft/Jordan-Audio
[ "region:us" ]
2023-10-11T07:17:45+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 669684377.68, "num_examples": 5044}], "download_size": 660360475, "dataset_size": 669684377.68}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T07:20:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "jo_aud" More Information needed
[ "# Dataset Card for \"jo_aud\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"jo_aud\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"jo_aud\"\n\nMore Information needed" ]
47fbc493a302accbdc0c562bd6337e2f70564d98
# Dataset Card for "angry-tweets-binary" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DDSC/angry-tweets-binary
[ "region:us" ]
2023-10-11T07:25:11+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 269093.3579427623, "num_examples": 1559}, {"name": "test", "num_bytes": 120444.7564469914, "num_examples": 684}], "download_size": 273118, "dataset_size": 389538.1143897537}}
2023-10-11T07:25:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "angry-tweets-binary" More Information needed
[ "# Dataset Card for \"angry-tweets-binary\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"angry-tweets-binary\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"angry-tweets-binary\"\n\nMore Information needed" ]
03beb076ff336a28269763c681da61b808b8d4ba
# Dataset Card for "spotlight-b-mc2-sql-create-context-enrichment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
renumics/spotlight-b-mc2-sql-create-context-enrichment
[ "region:us" ]
2023-10-11T07:29:36+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "answer.embedding", "sequence": "float32", "length": 2}, {"name": "question.embedding", "sequence": "float32", "length": 2}, {"name": "context.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 1885848, "num_examples": 78577}], "download_size": 2616932, "dataset_size": 1885848}}
2023-10-13T08:03:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spotlight-b-mc2-sql-create-context-enrichment" More Information needed
[ "# Dataset Card for \"spotlight-b-mc2-sql-create-context-enrichment\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spotlight-b-mc2-sql-create-context-enrichment\"\n\nMore Information needed" ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-b-mc2-sql-create-context-enrichment\"\n\nMore Information needed" ]
9e373039fc6b7d99eed02e1480490631127cb3cf
# Dataset Card for Evaluation run of crumb/gpt2023 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/crumb/gpt2023 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [crumb/gpt2023](https://huggingface.co/crumb/gpt2023) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_crumb__gpt2023", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-24T11:33:48.204905](https://huggingface.co/datasets/open-llm-leaderboard/details_crumb__gpt2023/blob/main/results_2023-10-24T11-33-48.204905.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0012583892617449664, "em_stderr": 0.0003630560893119132, "f1": 0.04730285234899332, "f1_stderr": 0.0013435226639105919, "acc": 0.25210824971442214, "acc_stderr": 0.007783509925876781 }, "harness|drop|3": { "em": 0.0012583892617449664, "em_stderr": 0.0003630560893119132, "f1": 0.04730285234899332, "f1_stderr": 0.0013435226639105919 }, "harness|gsm8k|5": { "acc": 0.003032600454890068, "acc_stderr": 0.0015145735612245494 }, "harness|winogrande|5": { "acc": 0.5011838989739542, "acc_stderr": 0.014052446290529012 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_crumb__gpt2023
[ "region:us" ]
2023-10-11T07:31:08+00:00
{"pretty_name": "Evaluation run of crumb/gpt2023", "dataset_summary": "Dataset automatically created during the evaluation run of model [crumb/gpt2023](https://huggingface.co/crumb/gpt2023) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_crumb__gpt2023\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T11:33:48.204905](https://huggingface.co/datasets/open-llm-leaderboard/details_crumb__gpt2023/blob/main/results_2023-10-24T11-33-48.204905.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.0003630560893119132,\n \"f1\": 0.04730285234899332,\n \"f1_stderr\": 0.0013435226639105919,\n \"acc\": 0.25210824971442214,\n \"acc_stderr\": 0.007783509925876781\n },\n \"harness|drop|3\": {\n \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.0003630560893119132,\n \"f1\": 0.04730285234899332,\n \"f1_stderr\": 0.0013435226639105919\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.003032600454890068,\n \"acc_stderr\": 0.0015145735612245494\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5011838989739542,\n \"acc_stderr\": 0.014052446290529012\n }\n}\n```", "repo_url": "https://huggingface.co/crumb/gpt2023", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|arc:challenge|25_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T11_33_48.204905", "path": ["**/details_harness|drop|3_2023-10-24T11-33-48.204905.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T11-33-48.204905.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T11_33_48.204905", "path": ["**/details_harness|gsm8k|5_2023-10-24T11-33-48.204905.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T11-33-48.204905.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hellaswag|10_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T08-30-54.655929.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T08-30-54.655929.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T08-30-54.655929.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T08-30-54.655929.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": 
"2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T08-30-54.655929.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T08-30-54.655929.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T11_33_48.204905", "path": ["**/details_harness|winogrande|5_2023-10-24T11-33-48.204905.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T11-33-48.204905.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T08_30_54.655929", "path": ["results_2023-10-11T08-30-54.655929.parquet"]}, {"split": "2023_10_24T11_33_48.204905", "path": ["results_2023-10-24T11-33-48.204905.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T11-33-48.204905.parquet"]}]}]}
2023-10-24T10:34:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of crumb/gpt2023

## Dataset Description

- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL

### Dataset Summary

Dataset automatically created during the evaluation run of model crumb/gpt2023 on the Open LLM Leaderboard.

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-10-24T11:33:48.204905 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Contributions
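For the loading step referenced under "Dataset Summary" above (the original snippet was stripped in this processed copy), a minimal sketch in the style of the other evaluation cards in this collection; the `details_crumb__gpt2023` repository id follows the leaderboard's `details_<org>__<model>` naming pattern and is an assumption here:

```python
from datasets import load_dataset

# Assumed repo id, following the leaderboard's "details_<org>__<model>" pattern.
data = load_dataset("open-llm-leaderboard/details_crumb__gpt2023",
                    "harness_winogrande_5",
                    split="train")
```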
[ "# Dataset Card for Evaluation run of crumb/gpt2023", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model crumb/gpt2023 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T11:33:48.204905(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of crumb/gpt2023", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model crumb/gpt2023 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T11:33:48.204905(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 16, 31, 164, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of crumb/gpt2023## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model crumb/gpt2023 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T11:33:48.204905(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
063bef3c1506bf2e01dfc80a9c0432c963f0dd72
# Dataset Card for "spotlight-gigant-horse2zebra-enrichment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
renumics/spotlight-gigant-horse2zebra-enrichment
[ "region:us" ]
2023-10-11T07:39:06+00:00
{"dataset_info": {"config_name": "horse", "features": [{"name": "image.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 8536, "num_examples": 1067}, {"name": "test", "num_bytes": 960, "num_examples": 120}], "download_size": 15203, "dataset_size": 9496}, "configs": [{"config_name": "horse", "data_files": [{"split": "train", "path": "horse/train-*"}, {"split": "test", "path": "horse/test-*"}]}]}
2023-10-13T08:26:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spotlight-gigant-horse2zebra-enrichment" More Information needed
[ "# Dataset Card for \"spotlight-gigant-horse2zebra-enrichment\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spotlight-gigant-horse2zebra-enrichment\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-gigant-horse2zebra-enrichment\"\n\nMore Information needed" ]
7b28288e0786e284e1d7968fbb0c933c77c85b65
# Dataset Card for "spotlight-laion-dalle-3-dataset-enrichment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
renumics/spotlight-laion-dalle-3-dataset-enrichment
[ "region:us" ]
2023-10-11T08:01:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "caption.embedding", "sequence": "float32", "length": 2}, {"name": "link.embedding", "sequence": "float32", "length": 2}, {"name": "message_id.embedding", "sequence": "float32", "length": 2}, {"name": "timestamp.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 47200, "num_examples": 1475}], "download_size": 67788, "dataset_size": 47200}}
2023-10-11T12:28:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spotlight-laion-dalle-3-dataset-enrichment" More Information needed
[ "# Dataset Card for \"spotlight-laion-dalle-3-dataset-enrichment\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spotlight-laion-dalle-3-dataset-enrichment\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-laion-dalle-3-dataset-enrichment\"\n\nMore Information needed" ]
a6bdf14dbd4b3dc129ba0ae5a64459ec42cff1da
# Dataset Card for "spotlight-nelorth-oxford-flowers-enrichment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
renumics/spotlight-nelorth-oxford-flowers-enrichment
[ "region:us" ]
2023-10-11T08:05:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 57352, "num_examples": 7169}, {"name": "test", "num_bytes": 8160, "num_examples": 1020}], "download_size": 92937, "dataset_size": 65512}}
2023-10-13T08:45:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spotlight-nelorth-oxford-flowers-enrichment" More Information needed
[ "# Dataset Card for \"spotlight-nelorth-oxford-flowers-enrichment\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spotlight-nelorth-oxford-flowers-enrichment\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-nelorth-oxford-flowers-enrichment\"\n\nMore Information needed" ]
90ce61699e90568db82cdce4c4035c6918d70aa6
# Huatuo26M-Lite 📚

## Table of Contents 🗂

- [Dataset Description](#dataset-description) 📝
- [Dataset Information](#dataset-information) ℹ️
- [Data Distribution](#data-distribution) 📊
- [Usage](#usage) 🔧
- [Citation](#citation) 📖

## Dataset Description 📝

Huatuo26M-Lite is a refined and optimized dataset based on the Huatuo26M dataset, which has undergone multiple purification processes and rewrites. It has more data dimensions and higher data quality. We welcome you to try it.

## Dataset Information ℹ️

- **Dataset Name:** Huatuo26M-Lite
- **Version:** _[0.0.1]_
- **Size:** _[178k]_
- **Language:** _[Chinese]_

### Abstract 📄

We collected 26 million pieces of original QA data in the medical field, but this data was not easy to use and carried some risk because it was obtained from Common Crawl. Therefore, we applied the following steps to the original 26 million records: deduplication, cleaning, extraction of high-frequency questions, scoring of the high-frequency questions with ChatGPT, and keeping only the high-scoring questions. We then used ChatGPT to rewrite the answers to the high-scoring questions, resulting in a fully refined dataset. Please refer to our paper for the specific processing methods.

### Data Collection 🕵️‍♂️

Our question data was collected from the internet, and we extracted the high-frequency portion. The answers were rewritten by ChatGPT using the original answers as a reference, and manual evaluation judged their quality to be better than the originals. Please feel free to use our dataset with confidence.

### Preprocessing/Cleaning 🧹

The dataset has been deduplicated and cleaned to ensure high-quality data. It was then refined using OpenAI's ChatGPT, which helped enhance the overall quality of the dataset.

## Data Distribution 📊

This section provides a visual overview of the distribution of data in the Huatuo26M-Lite dataset.

**Data Categories Bar Chart:**

![label](http://file.huatuogpt.cn/files/models_ref/huatuo26m/high_quality_huatuoshine.png)

This chart represents the distribution of data categories in the dataset.

**Top 20 Associated Diseases Table:**

| Rank | Disease | Entries | Share |
| ---- | ------- | ------- | ----- |
| 1 | 白癜风 (vitiligo) | 3308 | 1.8615% |
| 2 | 人流 (abortion) | 2686 | 1.5115% |
| 3 | 感冒 (common cold) | 2371 | 1.3342% |
| 4 | 癫痫 (epilepsy) | 2217 | 1.2476% |
| 5 | 痔疮 (hemorrhoids) | 2134 | 1.2009% |
| 6 | 疼痛 (pain) | 1842 | 1.0366% |
| 7 | 咳嗽 (cough) | 1799 | 1.0124% |
| 8 | 前列腺炎 (prostatitis) | 1564 | 0.8801% |
| 9 | 尖锐湿疣 (genital warts) | 1516 | 0.8531% |
| 10 | 肺癌 (lung cancer) | 1408 | 0.7923% |
| 11 | 出血 (bleeding) | 1400 | 0.7878% |
| 12 | 鼻炎 (rhinitis) | 1370 | 0.7709% |
| 13 | 肝癌 (liver cancer) | 1354 | 0.7619% |
| 14 | 糖尿病 (diabetes) | 1348 | 0.7586% |
| 15 | 过敏性鼻炎 (allergic rhinitis) | 1295 | 0.7287% |
| 16 | 发烧 (fever) | 1265 | 0.7119% |
| 17 | 乙肝 (hepatitis B) | 1232 | 0.6933% |
| 18 | 便秘 (constipation) | 1214 | 0.6832% |
| 19 | 甲亢 (hyperthyroidism) | 1178 | 0.6629% |
| 20 | 脱发 (hair loss) | 1173 | 0.6601% |

This table shows the top 20 diseases associated with the data entries in the dataset, along with their respective entry counts and proportions.

## Usage 🔧

```python
from datasets import load_dataset

dataset = load_dataset("FreedomIntelligence/Huatuo26M-Lite")
```

## Citation 📖

```
@misc{li2023huatuo26m,
      title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
      author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
      year={2023},
      eprint={2305.01526},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

---

Please note that this dataset is distributed "AS IS" without any warranty, express or implied, from the provider. Users should cite the dataset appropriately and respect any licensing or usage restrictions.
FreedomIntelligence/Huatuo26M-Lite
[ "task_categories:text-classification", "task_categories:question-answering", "task_categories:conversational", "task_categories:text-generation", "size_categories:100K<n<1M", "language:zh", "license:apache-2.0", "medical", "arxiv:2305.01526", "region:us" ]
2023-10-11T08:08:49+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "question-answering", "conversational", "text-generation"], "pretty_name": "Huatuo26M_v2", "tags": ["medical"]}
2023-11-29T08:46:31+00:00
[ "2305.01526" ]
[ "zh" ]
TAGS #task_categories-text-classification #task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #license-apache-2.0 #medical #arxiv-2305.01526 #region-us
Huatuo26M-Lite
==============

Table of Contents
-----------------

+ Dataset Description
+ Dataset Information ℹ️
+ Data Distribution
+ Usage
+ Citation

Dataset Description
-------------------

Huatuo26M-Lite is a refined and optimized dataset based on the Huatuo26M dataset, which has undergone multiple purification processes and rewrites. It has more data dimensions and higher data quality. We welcome you to try it.

Dataset Information ℹ️
----------------------

* Dataset Name: Huatuo26M-Lite
* Version: *[0.0.1]*
* Size: *[178k]*
* Language: *[Chinese]*

### Abstract

We collected 26 million pieces of original QA data in the medical field, but this data was not easy to use and carried some risk because it was obtained from Common Crawl. Therefore, we applied the following steps to the original 26 million records: deduplication, cleaning, extraction of high-frequency questions, scoring of the high-frequency questions with ChatGPT, and keeping only the high-scoring questions. We then used ChatGPT to rewrite the answers to the high-scoring questions, resulting in a fully refined dataset. Please refer to our paper for the specific processing methods.

### Data Collection

Our question data was collected from the internet, and we extracted the high-frequency portion. The answers were rewritten by ChatGPT using the original answers as a reference, and manual evaluation judged their quality to be better than the originals. Please feel free to use our dataset with confidence.

### Preprocessing/Cleaning

The dataset has been deduplicated and cleaned to ensure high-quality data. It was then refined using OpenAI's ChatGPT, which helped enhance the overall quality of the dataset.

Data Distribution
-----------------

This section provides a visual overview of the distribution of data in the Huatuo26M-Lite dataset.

Data Categories Bar Chart: !label

This chart represents the distribution of data categories in the dataset.

Top 20 Associated Diseases Table:

This table shows the top 20 diseases associated with the data entries in the dataset, along with their respective entry counts and proportions.

Usage
-----

---

Please note that this dataset is distributed "AS IS" without any warranty, express or implied, from the provider. Users should cite the dataset appropriately and respect any licensing or usage restrictions.
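The Usage snippet was stripped from this processed copy; restored from the full card earlier in this record:

```python
from datasets import load_dataset

dataset = load_dataset("FreedomIntelligence/Huatuo26M-Lite")
```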
[ "### Abstract\n\n\nWe collected 26 million pieces of original QA data in the medical field, but it was not easy to use and had some risks because it was obtained from Common Crawl. Therefore, we took the following steps based on the original 26 million data: deduplication, cleaning, extraction of high-frequency questions, scoring of high-frequency questions using ChatGPT, and filtering only high-scoring questions. We then used ChatGPT to rewrite the answers to the high-scoring questions, resulting in a completely refined dataset. Please refer to our paper for the specific processing methods.", "### Data Collection ️‍️\n\n\nur question data was collected from the internet, and we extracted the high-frequency portion. The answers were rewritten by ChatGPT based on the original answers as a reference, and their quality was judged to be better than the original answers through manual evaluation. Therefore, please feel free to use our dataset with confidence.", "### Preprocessing/Cleaning\n\n\nThe dataset has been processed to remove duplicates and cleaned to ensure high-quality data. It was then refined using OpenAI's ChatGPT, which helped in enhancing the overall quality of the dataset.\n\n\nData Distribution\n-----------------\n\n\nThis section provides a visual overview of the distribution of data in the Huatuo26M-Lite dataset.\n\n\nData Categories Bar Chart: !label\n\n\nThis chart represents the distribution of data categories in the dataset.\n\n\nTop 20 Associated Diseases Table:\n\n\n\nThis table shows the top 20 diseases associated with the data entries in the dataset, along with their respective data entry counts and proportions.\n\n\nUsage\n-----\n\n\n\n\n---\n\n\nPlease note that this dataset is distributed \"AS IS\" without any warranty, express or implied, from the provider. Users should cite the dataset appropriately and respect any licensing or usage restrictions." ]
[ "TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #license-apache-2.0 #medical #arxiv-2305.01526 #region-us \n", "### Abstract\n\n\nWe collected 26 million pieces of original QA data in the medical field, but it was not easy to use and had some risks because it was obtained from Common Crawl. Therefore, we took the following steps based on the original 26 million data: deduplication, cleaning, extraction of high-frequency questions, scoring of high-frequency questions using ChatGPT, and filtering only high-scoring questions. We then used ChatGPT to rewrite the answers to the high-scoring questions, resulting in a completely refined dataset. Please refer to our paper for the specific processing methods.", "### Data Collection ️‍️\n\n\nur question data was collected from the internet, and we extracted the high-frequency portion. The answers were rewritten by ChatGPT based on the original answers as a reference, and their quality was judged to be better than the original answers through manual evaluation. Therefore, please feel free to use our dataset with confidence.", "### Preprocessing/Cleaning\n\n\nThe dataset has been processed to remove duplicates and cleaned to ensure high-quality data. It was then refined using OpenAI's ChatGPT, which helped in enhancing the overall quality of the dataset.\n\n\nData Distribution\n-----------------\n\n\nThis section provides a visual overview of the distribution of data in the Huatuo26M-Lite dataset.\n\n\nData Categories Bar Chart: !label\n\n\nThis chart represents the distribution of data categories in the dataset.\n\n\nTop 20 Associated Diseases Table:\n\n\n\nThis table shows the top 20 diseases associated with the data entries in the dataset, along with their respective data entry counts and proportions.\n\n\nUsage\n-----\n\n\n\n\n---\n\n\nPlease note that this dataset is distributed \"AS IS\" without any warranty, express or implied, from the provider. Users should cite the dataset appropriately and respect any licensing or usage restrictions." ]
[ 87, 140, 82, 200 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-Chinese #license-apache-2.0 #medical #arxiv-2305.01526 #region-us \n### Abstract\n\n\nWe collected 26 million pieces of original QA data in the medical field, but it was not easy to use and had some risks because it was obtained from Common Crawl. Therefore, we took the following steps based on the original 26 million data: deduplication, cleaning, extraction of high-frequency questions, scoring of high-frequency questions using ChatGPT, and filtering only high-scoring questions. We then used ChatGPT to rewrite the answers to the high-scoring questions, resulting in a completely refined dataset. Please refer to our paper for the specific processing methods.### Data Collection ️‍️\n\n\nur question data was collected from the internet, and we extracted the high-frequency portion. The answers were rewritten by ChatGPT based on the original answers as a reference, and their quality was judged to be better than the original answers through manual evaluation. Therefore, please feel free to use our dataset with confidence." ]
f9e0312c4d1bd5e948f6ed1576219e1b0720e366
# Distil Whisper: LibriSpeech ASR

This is a variant of the [LibriSpeech ASR](https://huggingface.co/datasets/librispeech_asr) dataset, augmented to return the pseudo-labelled Whisper transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/librispeech_asr).

## Standalone Usage

First, install the latest version of the 🤗 Datasets package:

```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```

The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function:

```python
from datasets import load_dataset

dataset = load_dataset("distil-whisper/librispeech_asr", "all")

# take the first sample of the validation set
sample = dataset["validation.clean"][0]
```

It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk:

```python
from datasets import load_dataset

dataset = load_dataset("distil-whisper/librispeech_asr", "all", streaming=True)

# take the first sample of the validation set
sample = next(iter(dataset["validation.clean"]))
```

## Distil Whisper Usage

To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).

## License

This dataset is licensed under cc-by-4.0.
distil-whisper/librispeech_asr-token-ids
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc-by-4.0", "region:us" ]
2023-10-11T08:08:50+00:00
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "LibriSpeech ASR"}
2023-10-11T08:44:39+00:00
[]
[ "en" ]
TAGS #task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us
# Distil Whisper: LibriSpeech ASR

This is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper large-v2 model with *greedy* sampling. For information on how the original dataset was curated, refer to the original dataset card.

## Standalone Usage

First, install the latest version of the Datasets package:

The dataset can be downloaded and pre-processed on disk using the 'load_dataset' function:

It can also be streamed directly from the Hub using Datasets' streaming mode. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk:

## Distil Whisper Usage

To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the Distil Whisper repository.

## License

This dataset is licensed under cc-by-4.0.
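The code stripped from the Standalone Usage section above, restored from the full card earlier in this record:

```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```

```python
from datasets import load_dataset

# Download and pre-process on disk
dataset = load_dataset("distil-whisper/librispeech_asr", "all")
sample = dataset["validation.clean"][0]

# Or stream samples one at a time instead of downloading the full dataset
dataset = load_dataset("distil-whisper/librispeech_asr", "all", streaming=True)
sample = next(iter(dataset["validation.clean"]))
```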
[ "# Distil Whisper: LibriSpeech ASR \n\nThis is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.", "## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:", "## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.", "## License\n\nThis dataset is licensed under cc-by-4.0." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n", "# Distil Whisper: LibriSpeech ASR \n\nThis is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.", "## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:", "## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.", "## License\n\nThis dataset is licensed under cc-by-4.0." ]
[ 35, 110, 92, 40, 16 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n# Distil Whisper: LibriSpeech ASR \n\nThis is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc-by-4.0." ]
abc1dcec26ed3947cd6d2576f269ba59ee890ffb
# Distil Whisper: GigaSpeech

This is a variant of the [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset, augmented to return the pseudo-labelled Whisper transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).

## Standalone Usage

First, install the latest version of the 🤗 Datasets package:

```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```

The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function:

```python
from datasets import load_dataset

dataset = load_dataset("distil-whisper/gigaspeech-l", "l")

# take the first sample of the validation set
sample = dataset["validation"][0]
```

It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk:

```python
from datasets import load_dataset

dataset = load_dataset("distil-whisper/gigaspeech-l", "l", streaming=True)

# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```

## Distil Whisper Usage

To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).

## License

This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
distil-whisper/gigaspeech-l-token-ids
[ "task_categories:automatic-speech-recognition", "language:en", "license:other", "region:us" ]
2023-10-11T08:09:32+00:00
{"language": ["en"], "license": "other", "task_categories": ["automatic-speech-recognition"], "extra_gated_prompt": "SpeechColab does not own the copyright of the audio files. For researchers and educators who wish to use the audio files for non-commercial research and/or educational purposes, we can provide access through the Hub under certain conditions and terms. \nTerms of Access:\nThe \"Researcher\" has requested permission to use the GigaSpeech database (the \"Database\") at Tsinghua University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. The SpeechColab team and Tsinghua University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the SpeechColab team and Tsinghua University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. The SpeechColab team and Tsinghua University reserve the right to terminate Researcher's access to the Database at any time.\n6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n\nPlease also fill out the Google Form https://forms.gle/UuGQAPyscGRrUMLq6 to request access to the GigaSpeech dataset.", "extra_gated_fields": {"Name": "text", "Email": "text", "Organization": "text", "Address": "text", "I hereby confirm that I have requested access via the Google Form provided above": "checkbox", "I accept the terms of access": "checkbox"}}
2023-10-11T08:44:39+00:00
[]
[ "en" ]
TAGS #task_categories-automatic-speech-recognition #language-English #license-other #region-us
# Distil Whisper: GigaSpeech

This is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper large-v2 model with *greedy* sampling. For information on how the original dataset was curated, refer to the original dataset card.

## Standalone Usage

First, install the latest version of the Datasets package:

The dataset can be downloaded and pre-processed on disk using the 'load_dataset' function:

It can also be streamed directly from the Hub using Datasets' streaming mode. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk:

## Distil Whisper Usage

To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the Distil Whisper repository.

## License

This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card.
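As with the LibriSpeech record above, the code stripped from the Standalone Usage section is restored here from the full card earlier in this record:

```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```

```python
from datasets import load_dataset

# Download and pre-process on disk
dataset = load_dataset("distil-whisper/gigaspeech-l", "l")
sample = dataset["validation"][0]

# Or stream samples one at a time instead of downloading the full dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l", streaming=True)
sample = next(iter(dataset["validation"]))
```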
[ "# Distil Whisper: GigaSpeech \n\nThis is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.", "## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:", "## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.", "## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #language-English #license-other #region-us \n", "# Distil Whisper: GigaSpeech \n\nThis is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.", "## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:", "## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.", "## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card." ]
[ 31, 108, 92, 40, 30 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-other #region-us \n# Distil Whisper: GigaSpeech \n\nThis is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card." ]
00de8be81c5c24f9048156d691e89cbbf0b66088
# Dataset Card for Hugging Face Hub Dataset Cards

This dataset consists of [dataset cards](https://huggingface.co/docs/hub/datasets-cards) for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new [discussion](https://huggingface.co/datasets/librarian-bots/dataset_cards_with_metadata/discussions/new).

## Dataset Details

### Dataset Description

- **Curated by:** Daniel van Strien
- **Language(s) (NLP):** Dataset cards on the Hugging Face Hub are predominantly in English but may include other languages.

## Uses

There are a number of potential uses for this dataset including:

- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

This dataset has a single split.

## Dataset Creation

### Curation Rationale

The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards, and this option may be preferable if you have a very specific use case or require a different format; a short sketch of that route is given after this card.

### Source Data

The source data is `README.md` files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.

#### Data Collection and Processing

The data is downloaded using a CRON job on a daily basis.

#### Who are the source data producers?

The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository, although this information can be gathered from the Hugging Face Hub API.

### Annotations [optional]

There are no additional annotations in this dataset beyond the dataset card content.

#### Annotation process

N/A

#### Who are the annotators?

N/A

#### Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.

## Bias, Risks, and Limitations

Dataset cards are created by the community and we do not have any control over their content. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in them. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in either the training data or the responses provided by the dataset. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation

No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.

## Dataset Card Authors

[@davanstrien](https://huggingface.co/davanstrien)

## Dataset Card Contact

[@davanstrien](https://huggingface.co/davanstrien)
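The Hub-API route mentioned under "Curation Rationale" above, as a hedged illustration (function names are from the public `huggingface_hub` client; the bulk alternative is simply `load_dataset` on this dataset's id):

```python
from huggingface_hub import HfApi, hf_hub_download

# Fetch a single dataset card directly from the Hub -- useful when this
# dataset's bulk format is not a good fit for your use case.
card_path = hf_hub_download(
    repo_id="librarian-bots/dataset_cards_with_metadata",
    filename="README.md",
    repo_type="dataset",
)

# Or enumerate datasets whose cards you want to collect yourself.
api = HfApi()
for info in api.list_datasets(limit=5):
    print(info.id)
```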
librarian-bots/dataset_cards_with_metadata
[ "task_categories:text-retrieval", "size_categories:10K<n<100K", "ethics", "documentation", "region:us" ]
2023-10-11T08:15:10+00:00
{"size_categories": ["10K<n<100K"], "task_categories": ["text-retrieval"], "dataset_info": {"features": [{"name": "datasetId", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "last_modified", "dtype": "timestamp[us, tz=UTC]"}, {"name": "downloads", "dtype": "int64"}, {"name": "likes", "dtype": "int64"}, {"name": "tags", "sequence": "string"}, {"name": "task_categories", "sequence": "string"}, {"name": "createdAt", "dtype": "timestamp[us, tz=UTC]"}, {"name": "card", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 401585922, "num_examples": 72883}], "download_size": 94531421, "dataset_size": 401585922}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["ethics", "documentation"]}
2024-02-17T01:31:23+00:00
[]
[]
TAGS #task_categories-text-retrieval #size_categories-10K<n<100K #ethics #documentation #region-us
# Dataset Card for Hugging Face Hub Dataset Cards

This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.

## Dataset Details

### Dataset Description

- Curated by: Daniel van Strien
- Language(s) (NLP): Dataset cards on the Hugging Face Hub are predominantly in English but may include other languages.

## Uses

There are a number of potential uses for this dataset including:

- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards

### Out-of-Scope Use

## Dataset Structure

This dataset has a single split.

## Dataset Creation

### Curation Rationale

The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards, and this option may be preferable if you have a very specific use case or require a different format.

### Source Data

The source data is 'URL' files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.

#### Data Collection and Processing

The data is downloaded using a CRON job on a daily basis.

#### Who are the source data producers?

The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository, although this information can be gathered from the Hugging Face Hub API.

### Annotations [optional]

There are no additional annotations in this dataset beyond the dataset card content.

#### Annotation process

N/A

#### Who are the annotators?

N/A

#### Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.

## Bias, Risks, and Limitations

Dataset cards are created by the community and we do not have any control over their content. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in them. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in either the training data or the responses provided by the dataset. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.

## Dataset Card Authors

@davanstrien

## Dataset Card Contact

@davanstrien
[ "# Dataset Card for Hugging Face Hub Dataset Cards\n\nThis datasets consists of dataset cards for models hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. \nThis dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.\n\nThis dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.", "## Dataset Details", "### Dataset Description\n\n\n- Curated by: Daniel van Strien\n- Language(s) (NLP): Dataset cards on the Hugging Face Hub are predominantly in English but may include other languages.", "## Uses\n\nThere are a number of potential uses for this dataset including:\n- text mining to find common themes in dataset cards\n- analysis of the dataset card format/content\n- topic modelling of dataset cards\n- training language models on the dataset cards", "### Out-of-Scope Use", "## Dataset Structure\n\nThis dataset has a single split.", "## Dataset Creation", "### Curation Rationale\n\n\n\nThe dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.", "### Source Data\n\nThe source data is 'URL' files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.", "#### Data Collection and Processing\n\n\n\nThe data is downloaded using a CRON job on a daily basis.", "#### Who are the source data producers?\n\nThe source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.", "### Annotations [optional]\n\nThere are no additional annotations in this dataset beyond the dataset card content.", "#### Annotation process\n\nN/A", "#### Who are the annotators?\n\n\n\nN/A", "#### Personal and Sensitive Information\n\n\n\nWe make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.", "## Bias, Risks, and Limitations\n\n\n\nDataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. \nSome dataset cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the dataset. As a result this dataset may contain examples of bias. 
\n\nWhilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\nNo formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.", "## Dataset Card Authors \n\n@davanstrien", "## Dataset Card Contact\n\n@davanstrien" ]
[ "TAGS\n#task_categories-text-retrieval #size_categories-10K<n<100K #ethics #documentation #region-us \n", "# Dataset Card for Hugging Face Hub Dataset Cards\n\nThis datasets consists of dataset cards for models hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. \nThis dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.\n\nThis dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.", "## Dataset Details", "### Dataset Description\n\n\n- Curated by: Daniel van Strien\n- Language(s) (NLP): Dataset cards on the Hugging Face Hub are predominantly in English but may include other languages.", "## Uses\n\nThere are a number of potential uses for this dataset including:\n- text mining to find common themes in dataset cards\n- analysis of the dataset card format/content\n- topic modelling of dataset cards\n- training language models on the dataset cards", "### Out-of-Scope Use", "## Dataset Structure\n\nThis dataset has a single split.", "## Dataset Creation", "### Curation Rationale\n\n\n\nThe dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.", "### Source Data\n\nThe source data is 'URL' files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.", "#### Data Collection and Processing\n\n\n\nThe data is downloaded using a CRON job on a daily basis.", "#### Who are the source data producers?\n\nThe source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.", "### Annotations [optional]\n\nThere are no additional annotations in this dataset beyond the dataset card content.", "#### Annotation process\n\nN/A", "#### Who are the annotators?\n\n\n\nN/A", "#### Personal and Sensitive Information\n\n\n\nWe make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.", "## Bias, Risks, and Limitations\n\n\n\nDataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. \nSome dataset cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the dataset. 
As a result this dataset may contain examples of bias. \n\nWhilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\nNo formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.", "## Dataset Card Authors \n\n@davanstrien", "## Dataset Card Contact\n\n@davanstrien" ]
[ 36, 170, 4, 45, 57, 9, 14, 5, 82, 45, 22, 86, 27, 8, 12, 65, 156, 66, 11, 10 ]
[ "passage: TAGS\n#task_categories-text-retrieval #size_categories-10K<n<100K #ethics #documentation #region-us \n# Dataset Card for Hugging Face Hub Dataset Cards\n\nThis datasets consists of dataset cards for models hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. \nThis dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.\n\nThis dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.## Dataset Details### Dataset Description\n\n\n- Curated by: Daniel van Strien\n- Language(s) (NLP): Dataset cards on the Hugging Face Hub are predominantly in English but may include other languages.## Uses\n\nThere are a number of potential uses for this dataset including:\n- text mining to find common themes in dataset cards\n- analysis of the dataset card format/content\n- topic modelling of dataset cards\n- training language models on the dataset cards### Out-of-Scope Use## Dataset Structure\n\nThis dataset has a single split.## Dataset Creation### Curation Rationale\n\n\n\nThe dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.### Source Data\n\nThe source data is 'URL' files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.#### Data Collection and Processing\n\n\n\nThe data is downloaded using a CRON job on a daily basis." ]
e98f01adc1dd0dbd5b4524a44bd656541c28e352
# Dataset Card for Evaluation run of SkunkworksAI/Mistralic-7B-1

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/SkunkworksAI/Mistralic-7B-1
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [SkunkworksAI/Mistralic-7B-1](https://huggingface.co/SkunkworksAI/Mistralic-7B-1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_SkunkworksAI__Mistralic-7B-1",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-28T13:22:20.115560](https://huggingface.co/datasets/open-llm-leaderboard/details_SkunkworksAI__Mistralic-7B-1/blob/main/results_2023-10-28T13-22-20.115560.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.3366191275167785,
        "em_stderr": 0.004839388843031059,
        "f1": 0.43708682885906275,
        "f1_stderr": 0.004627060310059935,
        "acc": 0.44050675782818416,
        "acc_stderr": 0.010231909076615354
    },
    "harness|drop|3": {
        "em": 0.3366191275167785,
        "em_stderr": 0.004839388843031059,
        "f1": 0.43708682885906275,
        "f1_stderr": 0.004627060310059935
    },
    "harness|gsm8k|5": {
        "acc": 0.1106899166034875,
        "acc_stderr": 0.008642172551392479
    },
    "harness|winogrande|5": {
        "acc": 0.7703235990528808,
        "acc_stderr": 0.011821645601838227
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
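The "Latest results" section above links to a raw `results_*.json` file inside the dataset repository. As a hedged sketch (not an official snippet from the card), that file can be fetched directly with `hf_hub_download`; the top-level `"all"` key is assumed to match the structure the card displays and may differ in the actual file:

```python
# Sketch: download the results JSON linked under "Latest results" above.
# The filename is taken from the card; reading results["all"] is an
# assumption based on the structure the card shows.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_SkunkworksAI__Mistralic-7B-1",
    filename="results_2023-10-28T13-22-20.115560.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)
print(results.get("all"))  # aggregated em/f1/acc metrics, if laid out as shown
```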
open-llm-leaderboard/details_SkunkworksAI__Mistralic-7B-1
[ "region:us" ]
2023-10-11T08:21:44+00:00
{"pretty_name": "Evaluation run of SkunkworksAI/Mistralic-7B-1", "dataset_summary": "Dataset automatically created during the evaluation run of model [SkunkworksAI/Mistralic-7B-1](https://huggingface.co/SkunkworksAI/Mistralic-7B-1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SkunkworksAI__Mistralic-7B-1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T13:22:20.115560](https://huggingface.co/datasets/open-llm-leaderboard/details_SkunkworksAI__Mistralic-7B-1/blob/main/results_2023-10-28T13-22-20.115560.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3366191275167785,\n \"em_stderr\": 0.004839388843031059,\n \"f1\": 0.43708682885906275,\n \"f1_stderr\": 0.004627060310059935,\n \"acc\": 0.44050675782818416,\n \"acc_stderr\": 0.010231909076615354\n },\n \"harness|drop|3\": {\n \"em\": 0.3366191275167785,\n \"em_stderr\": 0.004839388843031059,\n \"f1\": 0.43708682885906275,\n \"f1_stderr\": 0.004627060310059935\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1106899166034875,\n \"acc_stderr\": 0.008642172551392479\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7703235990528808,\n \"acc_stderr\": 0.011821645601838227\n }\n}\n```", "repo_url": "https://huggingface.co/SkunkworksAI/Mistralic-7B-1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|arc:challenge|25_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T13_22_20.115560", "path": ["**/details_harness|drop|3_2023-10-28T13-22-20.115560.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T13-22-20.115560.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T13_22_20.115560", "path": ["**/details_harness|gsm8k|5_2023-10-28T13-22-20.115560.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T13-22-20.115560.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hellaswag|10_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": 
[{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-21-21.065888.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-21-21.065888.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T09-21-21.065888.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T09-21-21.065888.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T09-21-21.065888.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T13_22_20.115560", "path": ["**/details_harness|winogrande|5_2023-10-28T13-22-20.115560.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T13-22-20.115560.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T09_21_21.065888", "path": ["results_2023-10-11T09-21-21.065888.parquet"]}, {"split": "2023_10_28T13_22_20.115560", "path": ["results_2023-10-28T13-22-20.115560.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T13-22-20.115560.parquet"]}]}]}
2023-10-28T12:22:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SkunkworksAI/Mistralic-7B-1 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model SkunkworksAI/Mistralic-7B-1 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-28T13:22:20.115560 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of SkunkworksAI/Mistralic-7B-1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model SkunkworksAI/Mistralic-7B-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T13:22:20.115560(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SkunkworksAI/Mistralic-7B-1", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model SkunkworksAI/Mistralic-7B-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T13:22:20.115560(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 21, 31, 169, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of SkunkworksAI/Mistralic-7B-1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model SkunkworksAI/Mistralic-7B-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T13:22:20.115560(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
28d1e0e4f82e3d73d03d7ced54842329c7571c45
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-falcon-180b-v13-preview0

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-falcon-180b-v13-preview0
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-falcon-180b-v13-preview0](https://huggingface.co/OpenBuddy/openbuddy-falcon-180b-v13-preview0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-falcon-180b-v13-preview0",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-24T12:56:17.890074](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-falcon-180b-v13-preview0/blob/main/results_2023-10-24T12-56-17.890074.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.490876677852349,
        "em_stderr": 0.005119615515857085,
        "f1": 0.5498133389261767,
        "f1_stderr": 0.004838031306299291,
        "acc": 0.6212929481268546,
        "acc_stderr": 0.01211195240749183
    },
    "harness|drop|3": {
        "em": 0.490876677852349,
        "em_stderr": 0.005119615515857085,
        "f1": 0.5498133389261767,
        "f1_stderr": 0.004838031306299291
    },
    "harness|gsm8k|5": {
        "acc": 0.4162244124336619,
        "acc_stderr": 0.013577788334652662
    },
    "harness|winogrande|5": {
        "acc": 0.8263614838200474,
        "acc_stderr": 0.010646116480331
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
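Since this card was built from 3 runs, one way to compare runs is to enumerate the timestamped splits of the aggregated "results" config. This is a sketch, not something the card itself documents; the schema of the results rows is not stated here, so the snippet only prints the available columns rather than assuming one:

```python
# Sketch: list the timestamped runs behind the "results" config and load each.
# get_dataset_split_names is standard `datasets` API; we avoid assuming any
# particular results schema and just print the column names per run.
from datasets import get_dataset_split_names, load_dataset

repo = "open-llm-leaderboard/details_OpenBuddy__openbuddy-falcon-180b-v13-preview0"
for split in get_dataset_split_names(repo, "results"):
    if split == "latest":  # "latest" duplicates the newest timestamped split
        continue
    ds = load_dataset(repo, "results", split=split)
    print(split, ds.column_names)
```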
open-llm-leaderboard/details_OpenBuddy__openbuddy-falcon-180b-v13-preview0
[ "region:us" ]
2023-10-11T08:27:26+00:00
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-falcon-180b-v13-preview0", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-falcon-180b-v13-preview0](https://huggingface.co/OpenBuddy/openbuddy-falcon-180b-v13-preview0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-falcon-180b-v13-preview0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T12:56:17.890074](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-falcon-180b-v13-preview0/blob/main/results_2023-10-24T12-56-17.890074.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.490876677852349,\n \"em_stderr\": 0.005119615515857085,\n \"f1\": 0.5498133389261767,\n \"f1_stderr\": 0.004838031306299291,\n \"acc\": 0.6212929481268546,\n \"acc_stderr\": 0.01211195240749183\n },\n \"harness|drop|3\": {\n \"em\": 0.490876677852349,\n \"em_stderr\": 0.005119615515857085,\n \"f1\": 0.5498133389261767,\n \"f1_stderr\": 0.004838031306299291\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4162244124336619,\n \"acc_stderr\": 0.013577788334652662\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8263614838200474,\n \"acc_stderr\": 0.010646116480331\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-falcon-180b-v13-preview0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|arc:challenge|25_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|arc:challenge|25_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T12_56_17.890074", "path": ["**/details_harness|drop|3_2023-10-24T12-56-17.890074.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T12-56-17.890074.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T12_56_17.890074", "path": ["**/details_harness|gsm8k|5_2023-10-24T12-56-17.890074.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T12-56-17.890074.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": 
["**/details_harness|hellaswag|10_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hellaswag|10_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-27-08.727010.parquet", 
"**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T09-27-08.727010.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-53-08.711708.parquet", 
"**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-53-08.711708.parquet", 
"**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-53-08.711708.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T10-53-08.711708.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": 
["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T10-53-08.711708.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T10-53-08.711708.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T10-53-08.711708.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T12_56_17.890074", "path": ["**/details_harness|winogrande|5_2023-10-24T12-56-17.890074.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T12-56-17.890074.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T09_27_08.727010", "path": ["results_2023-10-11T09-27-08.727010.parquet"]}, {"split": "2023_10_11T10_53_08.711708", "path": ["results_2023-10-11T10-53-08.711708.parquet"]}, {"split": "2023_10_24T12_56_17.890074", "path": ["results_2023-10-24T12-56-17.890074.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T12-56-17.890074.parquet"]}]}]}
2023-10-24T11:56:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-falcon-180b-v13-preview0 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-falcon-180b-v13-preview0 on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-24T12:56:17.890074 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
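The loading snippet promised by "you can for instance do the following:" was dropped when this text was flattened; per this record's `dataset_summary` metadata above, the stripped snippet is:

```python
from datasets import load_dataset

# "train" always resolves to the latest timestamped results for this config.
data = load_dataset(
    "open-llm-leaderboard/details_OpenBuddy__openbuddy-falcon-180b-v13-preview0",
    "harness_winogrande_5",
    split="train",
)
```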
[ "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-falcon-180b-v13-preview0", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-falcon-180b-v13-preview0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T12:56:17.890074(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-falcon-180b-v13-preview0", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-falcon-180b-v13-preview0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T12:56:17.890074(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 29, 31, 177, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenBuddy/openbuddy-falcon-180b-v13-preview0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-falcon-180b-v13-preview0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T12:56:17.890074(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
a6344bd9b0e40445deb22dc6a37522b602d8091c
# Dataset Card for "new_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
St4n/new_dataset
[ "region:us" ]
2023-10-11T08:33:34+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25020, "num_examples": 100}], "download_size": 0, "dataset_size": 25020}}
2023-10-11T09:04:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "new_dataset" More Information needed
[ "# Dataset Card for \"new_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"new_dataset\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"new_dataset\"\n\nMore Information needed" ]
0e1c443a4655978c141dde3146b8cd18fd3f25d9
# Dataset Card for "permutated-wikitext" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yavasde/permutated-wikitext
[ "region:us" ]
2023-10-11T08:41:33+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11118443, "num_examples": 98407}, {"name": "test", "num_bytes": 1312320, "num_examples": 11960}, {"name": "valid", "num_bytes": 1165858, "num_examples": 10360}], "download_size": 8428865, "dataset_size": 13596621}}
2023-10-11T08:41:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "permutated-wikitext" More Information needed
[ "# Dataset Card for \"permutated-wikitext\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"permutated-wikitext\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"permutated-wikitext\"\n\nMore Information needed" ]
c31fc75d9d7d28cfca4a9839c97bf94f9abf839e
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13-base

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mistral-7b-v13-base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13-base",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-23T13:40:07.826401](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13-base/blob/main/results_2023-10-23T13-40-07.826401.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.28544463087248323,
        "em_stderr": 0.004625072383719666,
        "f1": 0.3571770134228197,
        "f1_stderr": 0.004531759792948092,
        "acc": 0.3628134250613192,
        "acc_stderr": 0.007861162191425665
    },
    "harness|drop|3": {
        "em": 0.28544463087248323,
        "em_stderr": 0.004625072383719666,
        "f1": 0.3571770134228197,
        "f1_stderr": 0.004531759792948092
    },
    "harness|gsm8k|5": {
        "acc": 0.012130401819560273,
        "acc_stderr": 0.003015294242890952
    },
    "harness|winogrande|5": {
        "acc": 0.7134964483030781,
        "acc_stderr": 0.01270703013996038
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13-base
[ "region:us" ]
2023-10-11T08:56:43+00:00
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13-base", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mistral-7b-v13-base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13-base\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T13:40:07.826401](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13-base/blob/main/results_2023-10-23T13-40-07.826401.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.28544463087248323,\n \"em_stderr\": 0.004625072383719666,\n \"f1\": 0.3571770134228197,\n \"f1_stderr\": 0.004531759792948092,\n \"acc\": 0.3628134250613192,\n \"acc_stderr\": 0.007861162191425665\n },\n \"harness|drop|3\": {\n \"em\": 0.28544463087248323,\n \"em_stderr\": 0.004625072383719666,\n \"f1\": 0.3571770134228197,\n \"f1_stderr\": 0.004531759792948092\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.012130401819560273,\n \"acc_stderr\": 0.003015294242890952\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7134964483030781,\n \"acc_stderr\": 0.01270703013996038\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|arc:challenge|25_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_23T13_40_07.826401", "path": ["**/details_harness|drop|3_2023-10-23T13-40-07.826401.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T13-40-07.826401.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_23T13_40_07.826401", "path": ["**/details_harness|gsm8k|5_2023-10-23T13-40-07.826401.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T13-40-07.826401.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hellaswag|10_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-56-20.350161.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-56-20.350161.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-56-20.350161.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T09-56-20.350161.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T09-56-20.350161.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T09-56-20.350161.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_23T13_40_07.826401", "path": ["**/details_harness|winogrande|5_2023-10-23T13-40-07.826401.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-23T13-40-07.826401.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T09_56_20.350161", "path": ["results_2023-10-11T09-56-20.350161.parquet"]}, {"split": "2023_10_23T13_40_07.826401", "path": ["results_2023-10-23T13-40-07.826401.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T13-40-07.826401.parquet"]}]}]}
2023-10-23T12:40:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13-base ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13-base on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following (see the sketch after this card text): ## Latest results These are the latest results from run 2023-10-23T13:40:07.826401 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
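The load snippet referenced above was flattened out of this copy of the card. Below is a minimal sketch of it; the repository id and config name are not invented but taken from the snippet preserved verbatim in this record's metadata field:

```python
from datasets import load_dataset

# Per-example details for one task of one run; "train" always points
# to the latest run for this configuration.
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13-base",
                    "harness_winogrande_5",
                    split="train")
```

Any other config listed in the metadata (e.g. "harness_gsm8k_5", or a timestamped split such as "2023_10_23T13_40_07.826401") can be substituted in the same call.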
[ "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13-base", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13-base on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-23T13:40:07.826401(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13-base", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13-base on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-23T13:40:07.826401(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 26, 31, 174, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13-base## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13-base on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T13:40:07.826401(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
5b19c40010458f23513fbc94c8349839c537d832
# Introduction <!-- Provide a quick summary of the dataset. --> DataOcean AI (SHA stock code: 688787), founded in 2005, is one of the earliest AI training data solution providers in China. As the first domestically listed enterprise in AI training data, DataOcean AI is committed to providing AI datasets and services for AI enterprises and R&D institutions. DataOcean AI specializes in delivering comprehensive, multilingual, cross-domain, and multimodal AI datasets, along with a range of data-related services. Our offerings include data annotation, data collection, data design, and model evaluation, catering to the diverse needs of enterprises across various industries. Our services encompass essential domains such as smart voice (including voice recognition and voice synthesis), computer vision, and natural language processing, spanning a wide array of approximately 200 primary languages and dialects from around the globe. DataOcean AI has been actively involved in the industry for nearly two decades and has developed close to 700 deep partnerships with leading IT companies, academic institutions, and emerging AI enterprises. It has successfully delivered thousands of customized projects and earned the deep trust of customers by focusing on competent, dependable, and safe data services. The company’s superior resources, which cover 190+ languages and dialects in more than 70 countries, as well as its technologically leading algorithm R&D team and well-experienced project teams, are valuable assets that contribute to the successful implementation of frontier AI projects around the world. ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [DATAOCEAN AI](https://en.dataoceanai.com/) - **License:** Commercial Check out the [files](https://huggingface.co/datasets/DataOceanAI/Off_the_self_dataset/tree/main) or visit our website for details ## Contact You can always contact us via email "[email protected]" or fill out the [contact form](https://en.dataoceanai.com/?m=index&c=dsvoice&a=consult&aboutus_id=9619) on our website ' https://en.dataoceanai.com/ ' <!-- Address questions around how the dataset is intended to be used. -->
DataOceanAI/Off_the_self_dataset
[ "task_categories:conversational", "task_categories:text-generation", "license:unknown", "datasets", "dataoceanai", "speechocean", "ASR", "TTS", "region:us" ]
2023-10-11T09:05:01+00:00
{"license": "unknown", "task_categories": ["conversational", "text-generation"], "pretty_name": "DataOcean AI - Off the self datasets", "tags": ["datasets", "dataoceanai", "speechocean", "ASR", "TTS"]}
2023-10-11T09:31:30+00:00
[]
[]
TAGS #task_categories-conversational #task_categories-text-generation #license-unknown #datasets #dataoceanai #speechocean #ASR #TTS #region-us
# Introduction DataOcean AI (SHA stock code: 688787), founded in 2005, is one of the earliest AI training data solution providers in China. As the first domestically listed enterprise in AI training data, DataOcean AI is committed to providing AI datasets and services for AI enterprises and R&D institutions. DataOcean AI specializes in delivering comprehensive, multilingual, cross-domain, and multimodal AI datasets, along with a range of data-related services. Our offerings include data annotation, data collection, data design, and model evaluation, catering to the diverse needs of enterprises across various industries. Our services encompass essential domains such as smart voice (including voice recognition and voice synthesis), computer vision, and natural language processing, spanning a wide array of approximately 200 primary languages and dialects from around the globe. DataOcean AI has been actively involved in the industry for nearly two decades and has developed close to 700 deep partnerships with leading IT companies, academic institutions, and emerging AI enterprises. It has successfully delivered thousands of customized projects and earned the deep trust of customers by focusing on competent, dependable, and safe data services. The company’s superior resources, which cover 190+ languages and dialects in more than 70 countries, as well as its technologically leading algorithm R&D team and well-experienced project teams, are valuable assets that contribute to the successful implementation of frontier AI projects around the world. ### Dataset Description - Curated by: DATAOCEAN AI - License: Commercial Check out the files or visit our website for details ## Contact You can always contact us via email "contact@URL" or fill out the contact form on our website ' URL '
[ "# Introduction\n\n\nDataOcean AI (SHA stock code: 688787), founded in 2005, is one of the earliest AI training data solution providers in China.\n \nAs the first listed enterprise in AI training data domestically, DataOcean AI is committed to providing AI datasets and services for AI enterprises and R&D institutions.\n \nDataOcean AI specializes in delivering comprehensive, multilingual, cross-domain, and multimodal AI datasets, along with a range of data-related services. Our offerings include data annotation, data collection, data design, and modal evaluation, catering to the diverse needs of enterprises across various industries. Our services encompass essential domains such as smart voice (including voice recognition and voice synthesis), computer vision, and natural language processing, spanning a wide array of approximately 200 primary languages and dialects from around the globe.\n \nDataOcean AI has been actively involved in the industry for nearly two decades and has developed close to 700 deep partnerships with leading IT companies, academic institutions, and emerging AI enterprises. It has delivered thousands of customized projects successfully and gained the deep trust of customers by focusing on competent, dependable, and safe data services. The company’s superior resources which cover 190+ languages and dialects in more than 70 countries, as well as its technologically leading algorithm R&D team and well-experienced project teams, are valuable assets of the company that contribute to the overall successful implementation of frontier AI projects around the world.", "### Dataset Description\n\n\n\n\n\n- Curated by: DATAOCEAN AI\n- License: Commercial\n\nCheck out the files or visit our website for details", "## Contact\n\nYou can alwasy contact us via email \"contact@URL\" or fill up the contact form in our website ' URL '" ]
[ "TAGS\n#task_categories-conversational #task_categories-text-generation #license-unknown #datasets #dataoceanai #speechocean #ASR #TTS #region-us \n", "# Introduction\n\n\nDataOcean AI (SHA stock code: 688787), founded in 2005, is one of the earliest AI training data solution providers in China.\n \nAs the first listed enterprise in AI training data domestically, DataOcean AI is committed to providing AI datasets and services for AI enterprises and R&D institutions.\n \nDataOcean AI specializes in delivering comprehensive, multilingual, cross-domain, and multimodal AI datasets, along with a range of data-related services. Our offerings include data annotation, data collection, data design, and modal evaluation, catering to the diverse needs of enterprises across various industries. Our services encompass essential domains such as smart voice (including voice recognition and voice synthesis), computer vision, and natural language processing, spanning a wide array of approximately 200 primary languages and dialects from around the globe.\n \nDataOcean AI has been actively involved in the industry for nearly two decades and has developed close to 700 deep partnerships with leading IT companies, academic institutions, and emerging AI enterprises. It has delivered thousands of customized projects successfully and gained the deep trust of customers by focusing on competent, dependable, and safe data services. The company’s superior resources which cover 190+ languages and dialects in more than 70 countries, as well as its technologically leading algorithm R&D team and well-experienced project teams, are valuable assets of the company that contribute to the overall successful implementation of frontier AI projects around the world.", "### Dataset Description\n\n\n\n\n\n- Curated by: DATAOCEAN AI\n- License: Commercial\n\nCheck out the files or visit our website for details", "## Contact\n\nYou can alwasy contact us via email \"contact@URL\" or fill up the contact form in our website ' URL '" ]
[ 54, 339, 31, 28 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-generation #license-unknown #datasets #dataoceanai #speechocean #ASR #TTS #region-us \n# Introduction\n\n\nDataOcean AI (SHA stock code: 688787), founded in 2005, is one of the earliest AI training data solution providers in China.\n \nAs the first listed enterprise in AI training data domestically, DataOcean AI is committed to providing AI datasets and services for AI enterprises and R&D institutions.\n \nDataOcean AI specializes in delivering comprehensive, multilingual, cross-domain, and multimodal AI datasets, along with a range of data-related services. Our offerings include data annotation, data collection, data design, and modal evaluation, catering to the diverse needs of enterprises across various industries. Our services encompass essential domains such as smart voice (including voice recognition and voice synthesis), computer vision, and natural language processing, spanning a wide array of approximately 200 primary languages and dialects from around the globe.\n \nDataOcean AI has been actively involved in the industry for nearly two decades and has developed close to 700 deep partnerships with leading IT companies, academic institutions, and emerging AI enterprises. It has delivered thousands of customized projects successfully and gained the deep trust of customers by focusing on competent, dependable, and safe data services. The company’s superior resources which cover 190+ languages and dialects in more than 70 countries, as well as its technologically leading algorithm R&D team and well-experienced project teams, are valuable assets of the company that contribute to the overall successful implementation of frontier AI projects around the world.### Dataset Description\n\n\n\n\n\n- Curated by: DATAOCEAN AI\n- License: Commercial\n\nCheck out the files or visit our website for details## Contact\n\nYou can alwasy contact us via email \"contact@URL\" or fill up the contact form in our website ' URL '" ]
4660557f109857deb95cac0338404e3ed4e6e306
# Dataset Card for Evaluation run of Weyaxi/SlimOpenOrca-Mistral-7B ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Weyaxi/SlimOpenOrca-Mistral-7B - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [Weyaxi/SlimOpenOrca-Mistral-7B](https://huggingface.co/Weyaxi/SlimOpenOrca-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Weyaxi__SlimOpenOrca-Mistral-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-24T00:40:26.410334](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__SlimOpenOrca-Mistral-7B/blob/main/results_2023-10-24T00-40-26.410334.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.004404362416107382, "em_stderr": 0.0006781451620479603, "f1": 0.0900964765100671, "f1_stderr": 0.001791740655538585, "acc": 0.494413205574767, "acc_stderr": 0.011528615182477716 }, "harness|drop|3": { "em": 0.004404362416107382, "em_stderr": 0.0006781451620479603, "f1": 0.0900964765100671, "f1_stderr": 0.001791740655538585 }, "harness|gsm8k|5": { "acc": 0.21455648218347234, "acc_stderr": 0.011307604104052887 }, "harness|winogrande|5": { "acc": 0.7742699289660616, "acc_stderr": 0.011749626260902547 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
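The snippet in the card targets one task-level configuration. The aggregated metrics can be read from the "results" configuration instead; a minimal sketch, assuming only the "results" config and "latest" split named in the card text above:

```python
from datasets import load_dataset

# Aggregated metrics across all evaluated tasks; the "latest" split tracks
# the most recent run (2023-10-24T00:40:26.410334 at the time of this card).
results = load_dataset("open-llm-leaderboard/details_Weyaxi__SlimOpenOrca-Mistral-7B",
                       "results",
                       split="latest")
```

Per the configs listed in this record's metadata, a timestamped split (e.g. "2023_10_11T10_04_43.187576") should select an earlier run instead of the latest one.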
open-llm-leaderboard/details_Weyaxi__SlimOpenOrca-Mistral-7B
[ "region:us" ]
2023-10-11T09:05:06+00:00
{"pretty_name": "Evaluation run of Weyaxi/SlimOpenOrca-Mistral-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/SlimOpenOrca-Mistral-7B](https://huggingface.co/Weyaxi/SlimOpenOrca-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__SlimOpenOrca-Mistral-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T00:40:26.410334](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__SlimOpenOrca-Mistral-7B/blob/main/results_2023-10-24T00-40-26.410334.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004404362416107382,\n \"em_stderr\": 0.0006781451620479603,\n \"f1\": 0.0900964765100671,\n \"f1_stderr\": 0.001791740655538585,\n \"acc\": 0.494413205574767,\n \"acc_stderr\": 0.011528615182477716\n },\n \"harness|drop|3\": {\n \"em\": 0.004404362416107382,\n \"em_stderr\": 0.0006781451620479603,\n \"f1\": 0.0900964765100671,\n \"f1_stderr\": 0.001791740655538585\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.21455648218347234,\n \"acc_stderr\": 0.011307604104052887\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7742699289660616,\n \"acc_stderr\": 0.011749626260902547\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/SlimOpenOrca-Mistral-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|arc:challenge|25_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T00_40_26.410334", "path": ["**/details_harness|drop|3_2023-10-24T00-40-26.410334.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T00-40-26.410334.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T00_40_26.410334", "path": ["**/details_harness|gsm8k|5_2023-10-24T00-40-26.410334.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T00-40-26.410334.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hellaswag|10_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-04-43.187576.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-04-43.187576.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T10-04-43.187576.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T10-04-43.187576.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T10-04-43.187576.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T00_40_26.410334", "path": ["**/details_harness|winogrande|5_2023-10-24T00-40-26.410334.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T00-40-26.410334.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T10_04_43.187576", "path": ["results_2023-10-11T10-04-43.187576.parquet"]}, {"split": "2023_10_24T00_40_26.410334", "path": ["results_2023-10-24T00-40-26.410334.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T00-40-26.410334.parquet"]}]}]}
2023-10-23T23:40:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Weyaxi/SlimOpenOrca-Mistral-7B ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Weyaxi/SlimOpenOrca-Mistral-7B on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-24T00:40:26.410334 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
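The loading snippet referenced in the summary above, reproduced from this record's metadata (it pulls the `harness_winogrande_5` configuration with the `datasets` library):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Weyaxi__SlimOpenOrca-Mistral-7B",
    "harness_winogrande_5",
    split="train")
```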
[ "# Dataset Card for Evaluation run of Weyaxi/SlimOpenOrca-Mistral-7B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/SlimOpenOrca-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T00:40:26.410334(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Weyaxi/SlimOpenOrca-Mistral-7B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/SlimOpenOrca-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T00:40:26.410334(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 23, 31, 171, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/SlimOpenOrca-Mistral-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/SlimOpenOrca-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T00:40:26.410334(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
ba2273259673de2fbfa12cc3fdb0d443f34e672a
# Dataset Card for "book_data_processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tinhpx2911/book_data_processed
[ "region:us" ]
2023-10-11T09:05:13+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7993647281, "num_examples": 14492}], "download_size": 3225112197, "dataset_size": 7993647281}}
2023-10-11T13:53:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "book_data_processed" More Information needed
[ "# Dataset Card for \"book_data_processed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"book_data_processed\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"book_data_processed\"\n\nMore Information needed" ]
be23e7b0ea09ffd4260444685192dcb6b9144104
# Dataset Card for "kannada_asr_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TheAIchemist13/kannada_asr_dataset
[ "region:us" ]
2023-10-11T09:10:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcriptions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20136755.0, "num_examples": 81}, {"name": "test", "num_bytes": 20136755.0, "num_examples": 81}], "download_size": 38875566, "dataset_size": 40273510.0}}
2023-10-13T05:22:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "kannada_asr_dataset" More Information needed
[ "# Dataset Card for \"kannada_asr_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"kannada_asr_dataset\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"kannada_asr_dataset\"\n\nMore Information needed" ]
01956fee1b5901576e0f0d0cf2819f2cfef80f2a
# Dataset Card for "vanhoc_processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tinhpx2911/vanhoc_processed
[ "region:us" ]
2023-10-11T09:11:52+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 161543279, "num_examples": 28242}], "download_size": 81656333, "dataset_size": 161543279}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T09:12:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vanhoc_processed" More Information needed
[ "# Dataset Card for \"vanhoc_processed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vanhoc_processed\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vanhoc_processed\"\n\nMore Information needed" ]
248a10c4623768edf0a1e71d21f358bc2e19b360
# Dataset Card for "vi_dataset_processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tinhpx2911/vi_dataset_processed
[ "region:us" ]
2023-10-11T09:19:44+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96539203, "num_examples": 11523}], "download_size": 48657504, "dataset_size": 96539203}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T09:20:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vi_dataset_processed" More Information needed
[ "# Dataset Card for \"vi_dataset_processed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vi_dataset_processed\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vi_dataset_processed\"\n\nMore Information needed" ]
6e7032e36cf09c8dc9021868b4daeba7a9a794e0
# Dataset Card for "oasst1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ArmelRandy/oasst1
[ "region:us" ]
2023-10-11T09:30:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10420145.1, "num_examples": 8784}, {"name": "test", "num_bytes": 1157793.9, "num_examples": 976}], "download_size": 7025520, "dataset_size": 11577939.0}}
2023-10-11T09:30:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "oasst1" More Information needed
[ "# Dataset Card for \"oasst1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"oasst1\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"oasst1\"\n\nMore Information needed" ]
5b0961fbaa6d7f9c344c5d59c29943fb900c2eca
### Reference: - "A Question-Entailment Approach to Question Answering". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019.
keivalya/MedQuad-MedicalQnADataset
[ "task_categories:question-answering", "task_categories:text2text-generation", "region:us" ]
2023-10-11T09:38:26+00:00
{"task_categories": ["question-answering", "text2text-generation"], "pretty_name": "MedQuad-KV"}
2023-10-11T09:50:41+00:00
[]
[]
TAGS #task_categories-question-answering #task_categories-text2text-generation #region-us
### Reference: - "A Question-Entailment Approach to Question Answering". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019.
[ "### Reference:\n- \"A Question-Entailment Approach to Question Answering\". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text2text-generation #region-us \n", "### Reference:\n- \"A Question-Entailment Approach to Question Answering\". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019." ]
[ 31, 41 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text2text-generation #region-us \n### Reference:\n- \"A Question-Entailment Approach to Question Answering\". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019." ]
337f3007ffffe58d46513f8134bec684a311793d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: thainq107/bert-base-banking77-pt2 * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@cnxt](https://huggingface.co/cnxt) for evaluating this model.
autoevaluate/autoeval-eval-banking77-default-c7e778-94421146088
[ "autotrain", "evaluation", "region:us" ]
2023-10-11T09:38:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "thainq107/bert-base-banking77-pt2", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2023-10-11T09:39:14+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: thainq107/bert-base-banking77-pt2 * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @cnxt for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: thainq107/bert-base-banking77-pt2\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @cnxt for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: thainq107/bert-base-banking77-pt2\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @cnxt for evaluating this model." ]
[ 13, 93, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: thainq107/bert-base-banking77-pt2\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @cnxt for evaluating this model." ]
9b87d90e6720a7e071d302fafe0d4264c4e0cf46
# Dataset Card for "pubmed_preprocess" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rntc/pubmed_preprocess
[ "region:us" ]
2023-10-11T09:40:27+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "fr", "path": "data/fr-*"}, {"split": "en", "path": "data/en-*"}, {"split": "es", "path": "data/es-*"}, {"split": "de", "path": "data/de-*"}, {"split": "it", "path": "data/it-*"}, {"split": "nl", "path": "data/nl-*"}, {"split": "pl", "path": "data/pl-*"}, {"split": "pt", "path": "data/pt-*"}, {"split": "ro", "path": "data/ro-*"}, {"split": "ru", "path": "data/ru-*"}, {"split": "zh", "path": "data/zh-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "fr", "num_bytes": 30582169, "num_examples": 28715}, {"name": "en", "num_bytes": 90868163767, "num_examples": 97816514}, {"name": "es", "num_bytes": 9925215, "num_examples": 14671}, {"name": "de", "num_bytes": 46540591, "num_examples": 53202}, {"name": "it", "num_bytes": 79767, "num_examples": 125}, {"name": "nl", "num_bytes": 373829, "num_examples": 461}, {"name": "pl", "num_bytes": 727984, "num_examples": 877}, {"name": "pt", "num_bytes": 29942156, "num_examples": 44558}, {"name": "ro", "num_bytes": 103813, "num_examples": 187}, {"name": "ru", "num_bytes": 2320647, "num_examples": 1671}, {"name": "zh", "num_bytes": 11481632, "num_examples": 10612}], "download_size": 302082086, "dataset_size": 91000241570}}
2023-10-11T12:15:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "pubmed_preprocess" More Information needed
[ "# Dataset Card for \"pubmed_preprocess\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"pubmed_preprocess\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_preprocess\"\n\nMore Information needed" ]
e6eb855977a9fa86a290f4b7afbdfa600e4b9d02
# Dataset Card for "EN_CW" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kamyar-zeinalipour/EN_CW
[ "region:us" ]
2023-10-11T09:43:09+00:00
{"dataset_info": {"features": [{"name": "date", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "clue", "dtype": "string"}, {"name": "partial", "dtype": "bool"}, {"name": "couple_occurencies", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 387434957, "num_examples": 7327448}], "download_size": 188270614, "dataset_size": 387434957}}
2023-10-11T11:55:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "EN_CW" More Information needed
[ "# Dataset Card for \"EN_CW\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"EN_CW\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"EN_CW\"\n\nMore Information needed" ]
7540d07fa06402fb1364e6e69c1b57ef62220bbf
# Dataset Card for "agile_1_line" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nalmeida/agile_1_line
[ "region:us" ]
2023-10-11T09:49:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3073, "num_examples": 1}], "download_size": 21657, "dataset_size": 3073}}
2023-10-11T09:49:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "agile_1_line" More Information needed
[ "# Dataset Card for \"agile_1_line\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"agile_1_line\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"agile_1_line\"\n\nMore Information needed" ]
90a4d34b6e56e14cadc9f7daaa0e1190f7ae4528
# Dataset Card for "demo_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
daspartho/demo_dataset
[ "region:us" ]
2023-10-11T10:17:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1634, "num_examples": 25}], "download_size": 2287, "dataset_size": 1634}}
2023-10-11T10:17:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "demo_dataset" More Information needed
[ "# Dataset Card for \"demo_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"demo_dataset\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"demo_dataset\"\n\nMore Information needed" ]
b40ef58323f1776141e7d19e2f92cc9baf34acc4
# Dataset Card for "spoiler_or_not" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
daspartho/spoiler_or_not
[ "region:us" ]
2023-10-11T10:45:53+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1657, "num_examples": 25}], "download_size": 2423, "dataset_size": 1657}}
2023-10-11T10:50:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spoiler_or_not" More Information needed
[ "# Dataset Card for \"spoiler_or_not\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spoiler_or_not\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spoiler_or_not\"\n\nMore Information needed" ]
d4c31132c5a470a71af38bf1bbda07fc834be634
# Dataset Card for "testing" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
makram93/testing
[ "region:us" ]
2023-10-11T10:56:27+00:00
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "doc_id", "dtype": "string"}, {"name": "title", "sequence": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 679123, "num_examples": 824}], "download_size": 388552, "dataset_size": 679123}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T10:56:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "testing" More Information needed
[ "# Dataset Card for \"testing\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"testing\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"testing\"\n\nMore Information needed" ]
ccfc49f7664a604d1cae38495558007e97589e9a
# Dataset Card for "laion2b_seed" This dataset is a subset of [laion2B-en-aesthetic](https://huggingface.co/datasets/laion/laion2B-en-aesthetic), with SEED v1 tokens.
liangyuch/laion2b_seed
[ "region:us" ]
2023-10-11T11:04:00+00:00
{"dataset_info": {"features": [{"name": "WIDTH", "dtype": "float64"}, {"name": "HEIGHT", "dtype": "float64"}, {"name": "similarity", "dtype": "float64"}, {"name": "punsafe", "dtype": "float64"}, {"name": "pwatermark", "dtype": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "key", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "error_message", "dtype": "null"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "original_width", "dtype": "int64"}, {"name": "original_height", "dtype": "int64"}, {"name": "exif", "dtype": "string"}, {"name": "sha256", "dtype": "string"}, {"name": "seed", "sequence": "int64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 134751345442, "num_examples": 172871223}], "download_size": 3195319434, "dataset_size": 134751345442}}
2023-10-20T05:16:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "laion2b_seed" This dataset is a subset of laion2B-en-aesthetic, with SEED v1 tokens.
[ "# Dataset Card for \"laion2b_seed\"\n\nThis dataset is a subset of laion2B-en-aesthetic, with SEED v1 tokens." ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"laion2b_seed\"\n\nThis dataset is a subset of laion2B-en-aesthetic, with SEED v1 tokens." ]
[ 6, 41 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"laion2b_seed\"\n\nThis dataset is a subset of laion2B-en-aesthetic, with SEED v1 tokens." ]
159dbbd5d30cdb912d9fab543ae122118cdcec91
# Dataset Card for "Codemix_tamil_english_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/Codemix_tamil_english_train
[ "region:us" ]
2023-10-11T11:09:01+00:00
{"dataset_info": {"features": [{"name": "tweet", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1934489.5931346258, "num_examples": 25840}], "download_size": 1135012, "dataset_size": 1934489.5931346258}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T11:09:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Codemix_tamil_english_train" More Information needed
[ "# Dataset Card for \"Codemix_tamil_english_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Codemix_tamil_english_train\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Codemix_tamil_english_train\"\n\nMore Information needed" ]
c6af1b820003a73d4712ef91f4dbc9b8e79ae324
# Dataset Card for "Codemix_tamil_english_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/Codemix_tamil_english_test
[ "region:us" ]
2023-10-11T11:09:03+00:00
{"dataset_info": {"features": [{"name": "tweet", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19614.4068653743, "num_examples": 262}], "download_size": 13483, "dataset_size": 19614.4068653743}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T11:09:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Codemix_tamil_english_test" More Information needed
[ "# Dataset Card for \"Codemix_tamil_english_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Codemix_tamil_english_test\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Codemix_tamil_english_test\"\n\nMore Information needed" ]
a05c20c44e1fd1c91b62c1aeb8ac9413a381dea6
# Dataset Card for "abstracts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Eitanli/abstracts
[ "region:us" ]
2023-10-11T11:09:52+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "recall", "dtype": "int64"}, {"name": "article_title", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 232927086.52719492, "num_examples": 135922}, {"name": "test", "num_bytes": 29117171.077408876, "num_examples": 16991}, {"name": "valid", "num_bytes": 29115457.395396195, "num_examples": 16990}], "download_size": 157551845, "dataset_size": 291159715.0}}
2023-10-11T11:10:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "abstracts" More Information needed
[ "# Dataset Card for \"abstracts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"abstracts\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"abstracts\"\n\nMore Information needed" ]
4d9ab45e151e33be5dd18080abc5c0505754201e
# Dataset Card for "spotlight-mnist-enrichment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
renumics/spotlight-mnist-enrichment
[ "region:us" ]
2023-10-11T11:10:58+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 480000, "num_examples": 60000}, {"name": "test", "num_bytes": 80000, "num_examples": 10000}], "download_size": 778435, "dataset_size": 560000}}
2023-10-13T08:42:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spotlight-mnist-enrichment" More Information needed
[ "# Dataset Card for \"spotlight-mnist-enrichment\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spotlight-mnist-enrichment\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-mnist-enrichment\"\n\nMore Information needed" ]
7b334f72d3d172aa3a5050573a5d41b8fcfcd093
# Dataset Card for "autotrain_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rageshhf/autotrain_data
[ "region:us" ]
2023-10-11T11:14:43+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5847564, "num_examples": 3283}], "download_size": 1672878, "dataset_size": 5847564}}
2023-10-11T11:14:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotrain_data" More Information needed
[ "# Dataset Card for \"autotrain_data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotrain_data\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotrain_data\"\n\nMore Information needed" ]
a847b8a4aa900beee3a0c29a8abee65e802d5919
# Dataset Card for Bokmål-Nynorsk Translation ## Dataset Summary This dataset is intended for machine translation between Bokmål and Nynorsk, in both directions. It contains 800,000 sentence pairs, sourced from Språkbanken and pruned to avoid overlap with the NorBench dataset. The data comes from translations of news text from Norsk telegrambyrå (NTB), performed by Nynorsk pressekontor (NPK). In addition, the dev and test sets have 1,000 entries. ## Data Collection - **Period**: February 2011 to December 2022 - **Source**: [Omsetjingsminne Nynorsk Pressekontor - Språkbanken](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-80/) - **Size**: 800,000 sentence pairs - **Format**: JSON-lines (with `nob` and `nno` fields) ### Processing Steps 1. Pruned to avoid overlap with NorBench 2. Deduplicated 3. Shuffled with a fixed seed (42) ## Usage Intended for training Bokmål-Nynorsk translation models. For more details, refer to the repository where the dataset preparation script and the actual dataset reside.
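To illustrate the format above, here is a minimal loading sketch (the split names follow the repository configuration and the `nob`/`nno` field names follow the card; the snippet itself is not part of the original card):

```python
from datasets import load_dataset

# Load the train/dev/test splits backed by the repository's JSON-lines files.
ds = load_dataset("NbAiLab/nbnn_translation")

# Each record is expected to hold a parallel Bokmål ("nob") / Nynorsk ("nno") pair.
pair = ds["train"][0]
print(pair["nob"])
print(pair["nno"])
```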
NbAiLab/nbnn_translation
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:nb", "language:no", "language:nn", "license:apache-2.0", "region:us" ]
2023-10-11T11:49:59+00:00
{"language": ["nb", "no", "nn"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.jsonl"}, {"split": "dev", "path": "dev.jsonl"}, {"split": "test", "path": "test.jsonl"}]}]}
2023-10-11T16:12:18+00:00
[]
[ "nb", "no", "nn" ]
TAGS #task_categories-text-classification #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #language-Norwegian Nynorsk #license-apache-2.0 #region-us
# Dataset Card for Bokmål-Nynorsk Translation ## Dataset Summary This dataset is intended for language translation for Bokmål to Nynorsk and vice versa. It contains 800,000 sentence pairs, sourced from Språkbanken and pruned to avoid overlap with the NorBench dataset. The data comes from translations of news text from Norsk telegrambyrå (NTB), performed by Nynorsk pressekontor (NPK). In addition the dev and test set has 1000 entries. ## Data Collection - Period: February 2011 to December 2022 - Source: Omsetjingsminne Nynorsk Pressekontor - Språkbanken - Size: 800,000 sentence pairs - Format: JSON-lines (with 'nob' , 'nno' fields) ### Processing Steps 1. Pruned to avoid overlap with NorBench 2. Deduplicated 3. Shuffled with a fixed seed (42) ## Usage Intended for training Bokmål-Nynorsk translation models. For more details, refer to the repository where the dataset preparation script and the actual dataset reside.
[ "# Dataset Card for Bokmål-Nynorsk Translation", "## Dataset Summary\n\nThis dataset is intended for language translation for Bokmål to Nynorsk and vice versa. It contains 800,000 sentence pairs, sourced from Språkbanken and pruned to avoid overlap with the NorBench dataset. The data comes from translations of news text from Norsk telegrambyrå (NTB), performed by Nynorsk pressekontor (NPK). In addition the dev and test set has 1000 entries.", "## Data Collection\n\n- Period: February 2011 to December 2022\n- Source: Omsetjingsminne Nynorsk Pressekontor - Språkbanken\n- Size: 800,000 sentence pairs\n- Format: JSON-lines (with 'nob' , 'nno' fields)", "### Processing Steps\n\n1. Pruned to avoid overlap with NorBench\n2. Deduplicated\n3. Shuffled with a fixed seed (42)", "## Usage\n\nIntended for training Bokmål-Nynorsk translation models. For more details, refer to the repository where the dataset preparation script and the actual dataset reside." ]
[ "TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #language-Norwegian Nynorsk #license-apache-2.0 #region-us \n", "# Dataset Card for Bokmål-Nynorsk Translation", "## Dataset Summary\n\nThis dataset is intended for language translation for Bokmål to Nynorsk and vice versa. It contains 800,000 sentence pairs, sourced from Språkbanken and pruned to avoid overlap with the NorBench dataset. The data comes from translations of news text from Norsk telegrambyrå (NTB), performed by Nynorsk pressekontor (NPK). In addition the dev and test set has 1000 entries.", "## Data Collection\n\n- Period: February 2011 to December 2022\n- Source: Omsetjingsminne Nynorsk Pressekontor - Språkbanken\n- Size: 800,000 sentence pairs\n- Format: JSON-lines (with 'nob' , 'nno' fields)", "### Processing Steps\n\n1. Pruned to avoid overlap with NorBench\n2. Deduplicated\n3. Shuffled with a fixed seed (42)", "## Usage\n\nIntended for training Bokmål-Nynorsk translation models. For more details, refer to the repository where the dataset preparation script and the actual dataset reside." ]
[ 59, 11, 94, 58, 32, 39 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #language-Norwegian Nynorsk #license-apache-2.0 #region-us \n# Dataset Card for Bokmål-Nynorsk Translation## Dataset Summary\n\nThis dataset is intended for language translation for Bokmål to Nynorsk and vice versa. It contains 800,000 sentence pairs, sourced from Språkbanken and pruned to avoid overlap with the NorBench dataset. The data comes from translations of news text from Norsk telegrambyrå (NTB), performed by Nynorsk pressekontor (NPK). In addition the dev and test set has 1000 entries.## Data Collection\n\n- Period: February 2011 to December 2022\n- Source: Omsetjingsminne Nynorsk Pressekontor - Språkbanken\n- Size: 800,000 sentence pairs\n- Format: JSON-lines (with 'nob' , 'nno' fields)### Processing Steps\n\n1. Pruned to avoid overlap with NorBench\n2. Deduplicated\n3. Shuffled with a fixed seed (42)## Usage\n\nIntended for training Bokmål-Nynorsk translation models. For more details, refer to the repository where the dataset preparation script and the actual dataset reside." ]
88af776495faeda30d632bd3d1cc023ce6e06988
# Deidentified Medical Charts with Human Curated Explanations *About* This dataset is a small sample from the EHR dataset used in the experiments described in our paper, "Speeding up LIME with Attention Weights," submitted to CoDS-COMAD 2024.
BUDDI-AI/Speeding-up-LIME
[ "language:en", "license:cc-by-nc-nd-4.0", "region:us" ]
2023-10-11T11:51:00+00:00
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "pretty_name": "b"}
2023-10-11T13:47:26+00:00
[]
[ "en" ]
TAGS #language-English #license-cc-by-nc-nd-4.0 #region-us
# Deidentified Medical Charts with Human Curated Explanations *About* This dataset is a small sample from the EHR dataset used by experiments described in our paper, "Speeding up LIME with Attention Weights," submitted to CoDS-COMAD 2024.
[ "# Deidentified Medical Charts with Human Curated Explanations\n\n*About*\nThis dataset is a small sample from the EHR dataset used by experiments described in our paper, \"Speeding up LIME with Attention Weights,\" submitted to CoDS-COMAD 2024." ]
[ "TAGS\n#language-English #license-cc-by-nc-nd-4.0 #region-us \n", "# Deidentified Medical Charts with Human Curated Explanations\n\n*About*\nThis dataset is a small sample from the EHR dataset used by experiments described in our paper, \"Speeding up LIME with Attention Weights,\" submitted to CoDS-COMAD 2024." ]
[ 23, 61 ]
[ "passage: TAGS\n#language-English #license-cc-by-nc-nd-4.0 #region-us \n# Deidentified Medical Charts with Human Curated Explanations\n\n*About*\nThis dataset is a small sample from the EHR dataset used by experiments described in our paper, \"Speeding up LIME with Attention Weights,\" submitted to CoDS-COMAD 2024." ]
fbbdd802e0204487db32806930151ac5e023392a
# Dataset Card for "t2i-cho-ben-thanh" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bellagio-ai/t2i-cho-ben-thanh
[ "region:us" ]
2023-10-11T11:51:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10324665.0, "num_examples": 31}], "download_size": 10264144, "dataset_size": 10324665.0}}
2023-10-11T11:51:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "t2i-cho-ben-thanh" More Information needed
[ "# Dataset Card for \"t2i-cho-ben-thanh\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"t2i-cho-ben-thanh\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"t2i-cho-ben-thanh\"\n\nMore Information needed" ]
6773353a359ada315b1015b5fcf8584d9a73c085
# Dataset Card for Evaluation run of lgaalves/mistral-7b-platypus1k ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/lgaalves/mistral-7b-platypus1k - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [lgaalves/mistral-7b-platypus1k](https://huggingface.co/lgaalves/mistral-7b-platypus1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_lgaalves__mistral-7b-platypus1k_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-06T16:30:05.854824](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__mistral-7b-platypus1k_public/blob/main/results_2023-11-06T16-30-05.854824.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0018875838926174498, "em_stderr": 0.0004445109990558977, "f1": 0.05987311241610734, "f1_stderr": 0.001362358723340712, "acc": 0.4725668736869253, "acc_stderr": 0.010904717715097085 }, "harness|drop|3": { "em": 0.0018875838926174498, "em_stderr": 0.0004445109990558977, "f1": 0.05987311241610734, "f1_stderr": 0.001362358723340712 }, "harness|gsm8k|5": { "acc": 0.16376042456406367, "acc_stderr": 0.010193237214420947 }, "harness|winogrande|5": { "acc": 0.7813733228097869, "acc_stderr": 0.011616198215773223 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_lgaalves__mistral-7b-platypus1k
[ "region:us" ]
2023-10-11T11:59:12+00:00
{"pretty_name": "Evaluation run of lgaalves/mistral-7b-platypus1k", "dataset_summary": "Dataset automatically created during the evaluation run of model [lgaalves/mistral-7b-platypus1k](https://huggingface.co/lgaalves/mistral-7b-platypus1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lgaalves__mistral-7b-platypus1k_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-06T16:30:05.854824](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__mistral-7b-platypus1k_public/blob/main/results_2023-11-06T16-30-05.854824.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558977,\n \"f1\": 0.05987311241610734,\n \"f1_stderr\": 0.001362358723340712,\n \"acc\": 0.4725668736869253,\n \"acc_stderr\": 0.010904717715097085\n },\n \"harness|drop|3\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558977,\n \"f1\": 0.05987311241610734,\n \"f1_stderr\": 0.001362358723340712\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16376042456406367,\n \"acc_stderr\": 0.010193237214420947\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7813733228097869,\n \"acc_stderr\": 0.011616198215773223\n }\n}\n```", "repo_url": "https://huggingface.co/lgaalves/mistral-7b-platypus1k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_04T18_46_29.797939", "path": ["**/details_harness|drop|3_2023-11-04T18-46-29.797939.parquet"]}, {"split": "2023_11_06T16_30_05.854824", "path": ["**/details_harness|drop|3_2023-11-06T16-30-05.854824.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-06T16-30-05.854824.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_04T18_46_29.797939", "path": ["**/details_harness|gsm8k|5_2023-11-04T18-46-29.797939.parquet"]}, {"split": "2023_11_06T16_30_05.854824", "path": ["**/details_harness|gsm8k|5_2023-11-06T16-30-05.854824.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-06T16-30-05.854824.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_04T18_46_29.797939", "path": ["**/details_harness|winogrande|5_2023-11-04T18-46-29.797939.parquet"]}, {"split": "2023_11_06T16_30_05.854824", "path": ["**/details_harness|winogrande|5_2023-11-06T16-30-05.854824.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-11-06T16-30-05.854824.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_04T18_46_29.797939", "path": ["results_2023-11-04T18-46-29.797939.parquet"]}, {"split": "2023_11_06T16_30_05.854824", "path": ["results_2023-11-06T16-30-05.854824.parquet"]}, {"split": "latest", "path": ["results_2023-11-06T16-30-05.854824.parquet"]}]}]}
2023-12-01T14:04:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of lgaalves/mistral-7b-platypus1k ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model lgaalves/mistral-7b-platypus1k on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-06T16:30:05.854824(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of lgaalves/mistral-7b-platypus1k", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/mistral-7b-platypus1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-06T16:30:05.854824(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of lgaalves/mistral-7b-platypus1k", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/mistral-7b-platypus1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-06T16:30:05.854824(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 24, 31, 173, 68, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of lgaalves/mistral-7b-platypus1k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/mistral-7b-platypus1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-06T16:30:05.854824(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
7750b56ca2f0318badab524fecfb5debaa5331ed
import numpy as np

def basic(array1):
    # Center the first four coordinates around 0.
    x = array1[0] - .5
    y = array1[1] - .5
    z = array1[2] - .5
    t = array1[3] - .5
    # Squared 4D radius, passed through a sinusoid plus noise scaled by the fifth column.
    r2 = x * x + y * y + z * z + t * t
    return 3 * np.sin(r2) + np.random.random() * array1[4]

# `a` is assumed to be an (n, 5) array; it is not defined in the original snippet.
f = np.apply_along_axis(basic, 1, a)
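As a hypothetical usage sketch (the input array `a` is not defined in the original snippet; the `(1000, 5)` shape below is an assumption chosen to match the five indices the function reads):

```python
import numpy as np

# Hypothetical input: 1,000 rows of 5 uniform features in [0, 1).
rng = np.random.default_rng(0)
a = rng.random((1000, 5))

# Assumes `basic` from the snippet above is in scope.
f = np.apply_along_axis(basic, 1, a)
print(f.shape)  # (1000,)
```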
wlaminack/Nonlinearltestingdataset
[ "license:apache-2.0", "region:us" ]
2023-10-11T12:16:38+00:00
{"license": "apache-2.0"}
2023-10-11T12:34:00+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
def basic(array1): x=(array1[0]-.5) y=(array1[1]-.5) z=(array1[2]-.5) t=(array1[3]-.5) r2=x*x+y*y+z*z+t*t return 3*URL(r2)+URL()*array1[4] f=np.apply_along_axis(basic, 1, a)
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
2ec491fdfbb9d490e107df02e8dae5eaccd68879
# Dataset Card for "t2i-one-pillar-pagoda" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bellagio-ai/t2i-one-pillar-pagoda
[ "region:us" ]
2023-10-11T12:37:13+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10724002.0, "num_examples": 27}], "download_size": 10667654, "dataset_size": 10724002.0}}
2023-10-11T12:37:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "t2i-one-pillar-pagoda" More Information needed
[ "# Dataset Card for \"t2i-one-pillar-pagoda\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"t2i-one-pillar-pagoda\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"t2i-one-pillar-pagoda\"\n\nMore Information needed" ]
f99b6fdc4b3ea019b70c6079d1a559f1a05cb0c8
# Dataset Card for "c4-chinese-zhtw" ## 內容 Common Crawl 是一個非營利組織,負責抓取網路並向公眾免費提供其檔案和資料集。Common Crawl 的網路檔案包含自 2008 年以來收集的 PB 級資料。它一般每月完成一次抓取。 Common Crawl 的爬蟲程式遵守 nofollow 和 robots.txt 政策。用於處理 Common Crawl 資料集的開源程式碼是公開可用的。 這個繁中的數據來是來自 [Common Crawl](https://commoncrawl.org/overview) **2023-14** 的 data archive 下載并進行清理 。 這是 [jed351](https://huggingface.co/jed351) 準備的版本,託管在這個位址: - https://huggingface.co/datasets/jed351/Traditional-Chinese-Common-Crawl-Filtered ## 支援的任務 C4主要用於預訓練語言模型(pretrain language model)。 ## 範例 一個樣本的範例: ``` { 'url': 'http://www.bilingtong.com/cpzx/96.html', 'timestamp': '2023-03-21 02:12:48', 'content_language': 'zho', 'content_type': 'text/plain', 'text': '新風系統是通過系統設計送風和排風使室內空氣存在一空氣 。無需開窗全天持續不斷有組.....' } ``` ## 資料欄位 資料有幾個欄位: - `url`: 來源 url - `timestamp`: 時間戳 - `content_language`: 內容包含的語言種類 - `content_type`: 內容類型,也稱為 MIME 或媒體類型,是 Web 伺服器回應標頭中的聲明 - `text`:網頁清理後的文字內容 ## 數據清理 請參考在 Github 上的專案 [c4-dataset-script](https://github.com/jedcheng/c4-dataset-script) 來了解數據下載與清理的相關邏輯與程式碼。 主要的步驟有: 1. Download the WET crawl archive index file 2. Run download and Chinese screening script on Spark 3. Filter out non-sentence lines and toxic document 4. Remove duplicated text 5. Remove documents that are over self-repeating - Repetition Removal in DeepMind MassiveText ## 許可資訊 請尊循 Common Craw terms of use 的條款。 - https://commoncrawl.org/terms-of-use
erhwenkuo/c4-chinese-zhtw
[ "task_categories:text-generation", "task_categories:fill-mask", "size_categories:1M<n<10M", "language:zh", "region:us" ]
2023-10-11T12:39:56+00:00
{"language": ["zh"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "fill-mask"], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "content_language", "dtype": "string"}, {"name": "content_type", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12480603148, "num_examples": 2967556}], "download_size": 8659425404, "dataset_size": 12480603148}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-12T03:00:07+00:00
[]
[ "zh" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Chinese #region-us
# Dataset Card for "c4-chinese-zhtw" ## 內容 Common Crawl 是一個非營利組織,負責抓取網路並向公眾免費提供其檔案和資料集。Common Crawl 的網路檔案包含自 2008 年以來收集的 PB 級資料。它一般每月完成一次抓取。 Common Crawl 的爬蟲程式遵守 nofollow 和 URL 政策。用於處理 Common Crawl 資料集的開源程式碼是公開可用的。 這個繁中的數據來是來自 Common Crawl 2023-14 的 data archive 下載并進行清理 。 這是 jed351 準備的版本,託管在這個位址: - URL ## 支援的任務 C4主要用於預訓練語言模型(pretrain language model)。 ## 範例 一個樣本的範例: ## 資料欄位 資料有幾個欄位: - 'url': 來源 url - 'timestamp': 時間戳 - 'content_language': 內容包含的語言種類 - 'content_type': 內容類型,也稱為 MIME 或媒體類型,是 Web 伺服器回應標頭中的聲明 - 'text':網頁清理後的文字內容 ## 數據清理 請參考在 Github 上的專案 c4-dataset-script 來了解數據下載與清理的相關邏輯與程式碼。 主要的步驟有: 1. Download the WET crawl archive index file 2. Run download and Chinese screening script on Spark 3. Filter out non-sentence lines and toxic document 4. Remove duplicated text 5. Remove documents that are over self-repeating - Repetition Removal in DeepMind MassiveText ## 許可資訊 請尊循 Common Craw terms of use 的條款。 - URL
[ "# Dataset Card for \"c4-chinese-zhtw\"", "## 內容\n\nCommon Crawl 是一個非營利組織,負責抓取網路並向公眾免費提供其檔案和資料集。Common Crawl 的網路檔案包含自 2008 年以來收集的 PB 級資料。它一般每月完成一次抓取。\n\nCommon Crawl 的爬蟲程式遵守 nofollow 和 URL 政策。用於處理 Common Crawl 資料集的開源程式碼是公開可用的。\n\n這個繁中的數據來是來自 Common Crawl 2023-14 的 data archive 下載并進行清理 。\n\n這是 jed351 準備的版本,託管在這個位址:\n\n- URL", "## 支援的任務\n\nC4主要用於預訓練語言模型(pretrain language model)。", "## 範例\n\n一個樣本的範例:", "## 資料欄位\n\n資料有幾個欄位:\n\n- 'url': 來源 url\n- 'timestamp': 時間戳\n- 'content_language': 內容包含的語言種類\n- 'content_type': 內容類型,也稱為 MIME 或媒體類型,是 Web 伺服器回應標頭中的聲明\n- 'text':網頁清理後的文字內容", "## 數據清理\n\n請參考在 Github 上的專案 c4-dataset-script 來了解數據下載與清理的相關邏輯與程式碼。\n\n主要的步驟有:\n\n1. Download the WET crawl archive index file\n2. Run download and Chinese screening script on Spark\n3. Filter out non-sentence lines and toxic document\n4. Remove duplicated text\n5. Remove documents that are over self-repeating - Repetition Removal in DeepMind MassiveText", "## 許可資訊\n\n請尊循 Common Craw terms of use 的條款。\n\n- URL" ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Chinese #region-us \n", "# Dataset Card for \"c4-chinese-zhtw\"", "## 內容\n\nCommon Crawl 是一個非營利組織,負責抓取網路並向公眾免費提供其檔案和資料集。Common Crawl 的網路檔案包含自 2008 年以來收集的 PB 級資料。它一般每月完成一次抓取。\n\nCommon Crawl 的爬蟲程式遵守 nofollow 和 URL 政策。用於處理 Common Crawl 資料集的開源程式碼是公開可用的。\n\n這個繁中的數據來是來自 Common Crawl 2023-14 的 data archive 下載并進行清理 。\n\n這是 jed351 準備的版本,託管在這個位址:\n\n- URL", "## 支援的任務\n\nC4主要用於預訓練語言模型(pretrain language model)。", "## 範例\n\n一個樣本的範例:", "## 資料欄位\n\n資料有幾個欄位:\n\n- 'url': 來源 url\n- 'timestamp': 時間戳\n- 'content_language': 內容包含的語言種類\n- 'content_type': 內容類型,也稱為 MIME 或媒體類型,是 Web 伺服器回應標頭中的聲明\n- 'text':網頁清理後的文字內容", "## 數據清理\n\n請參考在 Github 上的專案 c4-dataset-script 來了解數據下載與清理的相關邏輯與程式碼。\n\n主要的步驟有:\n\n1. Download the WET crawl archive index file\n2. Run download and Chinese screening script on Spark\n3. Filter out non-sentence lines and toxic document\n4. Remove duplicated text\n5. Remove documents that are over self-repeating - Repetition Removal in DeepMind MassiveText", "## 許可資訊\n\n請尊循 Common Craw terms of use 的條款。\n\n- URL" ]
[ 45, 14, 136, 19, 12, 83, 100, 20 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Chinese #region-us \n# Dataset Card for \"c4-chinese-zhtw\"## 內容\n\nCommon Crawl 是一個非營利組織,負責抓取網路並向公眾免費提供其檔案和資料集。Common Crawl 的網路檔案包含自 2008 年以來收集的 PB 級資料。它一般每月完成一次抓取。\n\nCommon Crawl 的爬蟲程式遵守 nofollow 和 URL 政策。用於處理 Common Crawl 資料集的開源程式碼是公開可用的。\n\n這個繁中的數據來是來自 Common Crawl 2023-14 的 data archive 下載并進行清理 。\n\n這是 jed351 準備的版本,託管在這個位址:\n\n- URL## 支援的任務\n\nC4主要用於預訓練語言模型(pretrain language model)。## 範例\n\n一個樣本的範例:## 資料欄位\n\n資料有幾個欄位:\n\n- 'url': 來源 url\n- 'timestamp': 時間戳\n- 'content_language': 內容包含的語言種類\n- 'content_type': 內容類型,也稱為 MIME 或媒體類型,是 Web 伺服器回應標頭中的聲明\n- 'text':網頁清理後的文字內容## 數據清理\n\n請參考在 Github 上的專案 c4-dataset-script 來了解數據下載與清理的相關邏輯與程式碼。\n\n主要的步驟有:\n\n1. Download the WET crawl archive index file\n2. Run download and Chinese screening script on Spark\n3. Filter out non-sentence lines and toxic document\n4. Remove duplicated text\n5. Remove documents that are over self-repeating - Repetition Removal in DeepMind MassiveText## 許可資訊\n\n請尊循 Common Craw terms of use 的條款。\n\n- URL" ]
5c130506ee73805299b09175688d2e6eabb0969a
# Aggressive Behavior Video Classification ## WARNING: People in the videos exhibit aggressive behavior The dataset with videos depicting people exhibiting **aggressive and non-aggressive behavior** is intended for classification purposes. It consists of a collection of video files that capture various individuals engaging in different activities and displaying distinct behavioral patterns, and a CSV file with the classification. **Aggressive Behavior Video Classification Dataset** can have multiple applications, such as surveillance systems, security modules, or social behavior analysis platforms. ![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2F4c8444fb8ddba04b0b0191d3517af3c6%2Ffreecompress-ezgif.gif?generation=1697023398942461&alt=media) # Get the dataset ### This is just an example of the data Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=aggressive-behavior-video-classification) to discuss your requirements, learn about the price and buy the dataset. # Dataset structure The dataset consists of: - **files**: folder with videos of people exhibiting aggressive and non-aggressive behavior (subfolders "aggressive" and "non_aggressive", respectively), - **.csv file**: the path of each video in the **"files"** folder and the classification of the behavior # People Behavior Video Classification can be made in accordance with your requirements. ## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=aggressive-behavior-video-classification)** provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets** TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
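As an illustration of the structure described above, a small sketch for splitting the annotation CSV by class (the file name `annotations.csv` and the `name`/`type` column names are assumptions inferred from the repository's feature list, not guaranteed by the card):

```python
import pandas as pd

# Hypothetical annotation file: one row per video with its path ("name") and behavior class ("type").
df = pd.read_csv("annotations.csv")

aggressive = df[df["type"] == "aggressive"]
non_aggressive = df[df["type"] == "non_aggressive"]
print(len(aggressive), "aggressive clips,", len(non_aggressive), "non-aggressive clips")
```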
TrainingDataPro/aggressive-behavior-video-classification
[ "task_categories:video-classification", "language:en", "license:cc-by-nc-nd-4.0", "code", "legal", "region:us" ]
2023-10-11T12:50:17+00:00
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["video-classification"], "tags": ["code", "legal"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 422, "num_examples": 11}], "download_size": 1387, "dataset_size": 422}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-10T08:52:08+00:00
[]
[ "en" ]
TAGS #task_categories-video-classification #language-English #license-cc-by-nc-nd-4.0 #code #legal #region-us
# Aggressive Behavior Video Classification ## WARNING: People in the videos exhibit aggressive behavior The dataset with videos depicting people exhibiting aggressive and non-aggressive behavior is intended for classification purposes. It consists of a collection of video files that capture various individuals engaging in different activities and displaying distinct behavioral patterns and CSV-file with classification. Aggressive Behavior Video Classification Dataset can have multiple applications, such as surveillance systems, security modules, or social behavior analysis platforms. ![](URL # Get the dataset ### This is just an example of the data Leave a request on URL to discuss your requirements, learn about the price and buy the dataset. # Dataset structure The dataset consists of: - files: folder with videos with people exhibiting aggressive and non-aggressive behaviour (subfolders "aggressive" and "non_aggressive" respectively), - .csv file: path of each video in the "files" folder and classification of the behavoir # People Behavior Video Classification might be made in accordance with your requirements. ## TrainingData provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: URL TrainingData's GitHub: URL
[ "# Aggressive Behavior Video Classification", "## WARNING: People in the videos exhibit aggressive behavior\n\nThe dataset with videos depicting people exhibiting aggressive and non-aggressive behavior is intended for classification purposes. It consists of a collection of video files that capture various individuals engaging in different activities and displaying distinct behavioral patterns and CSV-file with classification.\n\nAggressive Behavior Video Classification Dataset can have multiple applications, such as surveillance systems, security modules, or social behavior analysis platforms.\n\n![](URL", "# Get the dataset", "### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.", "# Dataset structure\nThe dataset consists of:\n- files: folder with videos with people exhibiting aggressive and non-aggressive behaviour (subfolders \"aggressive\" and \"non_aggressive\" respectively),\n- .csv file: path of each video in the \"files\" folder and classification of the behavoir", "# People Behavior Video Classification might be made in accordance with your requirements.", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL" ]
[ "TAGS\n#task_categories-video-classification #language-English #license-cc-by-nc-nd-4.0 #code #legal #region-us \n", "# Aggressive Behavior Video Classification", "## WARNING: People in the videos exhibit aggressive behavior\n\nThe dataset with videos depicting people exhibiting aggressive and non-aggressive behavior is intended for classification purposes. It consists of a collection of video files that capture various individuals engaging in different activities and displaying distinct behavioral patterns and CSV-file with classification.\n\nAggressive Behavior Video Classification Dataset can have multiple applications, such as surveillance systems, security modules, or social behavior analysis platforms.\n\n![](URL", "# Get the dataset", "### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.", "# Dataset structure\nThe dataset consists of:\n- files: folder with videos with people exhibiting aggressive and non-aggressive behaviour (subfolders \"aggressive\" and \"non_aggressive\" respectively),\n- .csv file: path of each video in the \"files\" folder and classification of the behavoir", "# People Behavior Video Classification might be made in accordance with your requirements.", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL" ]
[ 38, 10, 117, 5, 30, 75, 17, 39 ]
[ "passage: TAGS\n#task_categories-video-classification #language-English #license-cc-by-nc-nd-4.0 #code #legal #region-us \n# Aggressive Behavior Video Classification## WARNING: People in the videos exhibit aggressive behavior\n\nThe dataset with videos depicting people exhibiting aggressive and non-aggressive behavior is intended for classification purposes. It consists of a collection of video files that capture various individuals engaging in different activities and displaying distinct behavioral patterns and CSV-file with classification.\n\nAggressive Behavior Video Classification Dataset can have multiple applications, such as surveillance systems, security modules, or social behavior analysis platforms.\n\n![](URL# Get the dataset### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.# Dataset structure\nThe dataset consists of:\n- files: folder with videos with people exhibiting aggressive and non-aggressive behaviour (subfolders \"aggressive\" and \"non_aggressive\" respectively),\n- .csv file: path of each video in the \"files\" folder and classification of the behavoir# People Behavior Video Classification might be made in accordance with your requirements.## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL" ]
d1dfdff508ab48e11acd8201a272394fcd87fc6f
# Dataset Card for "marketing_emails" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adamtappis/marketing_emails
[ "region:us" ]
2023-10-11T12:58:52+00:00
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20404, "num_examples": 10}], "download_size": 24797, "dataset_size": 20404}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T12:58:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "marketing_emails" More Information needed
[ "# Dataset Card for \"marketing_emails\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"marketing_emails\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"marketing_emails\"\n\nMore Information needed" ]
505ec4ad4f3872c130dbc01ed18683b3be860f22
# Dataset Card for "market_mail_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nehasingh555/market_mail_data
[ "region:us" ]
2023-10-11T13:19:09+00:00
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18382, "num_examples": 10}], "download_size": 24632, "dataset_size": 18382}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T13:19:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "market_mail_data" More Information needed
[ "# Dataset Card for \"market_mail_data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"market_mail_data\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"market_mail_data\"\n\nMore Information needed" ]
c7c0ead198f149fb17f87f32036d351d3d465b4e
# Dataset Card for "breast-cancer-QAs-llama" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
grasool/breast-cancer-QAs-llama
[ "region:us" ]
2023-10-11T13:39:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 104168, "num_examples": 298}, {"name": "test", "num_bytes": 11934, "num_examples": 34}], "download_size": 65852, "dataset_size": 116102}}
2023-10-11T15:17:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "breast-cancer-QAs-llama" More Information needed
[ "# Dataset Card for \"breast-cancer-QAs-llama\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"breast-cancer-QAs-llama\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"breast-cancer-QAs-llama\"\n\nMore Information needed" ]
8ea08e54eabdd0634c17ef4aa592ba11ccf329d8
# Dataset Card for "agile_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nalmeida/agile_dataset
[ "region:us" ]
2023-10-11T13:44:50+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2950354, "num_examples": 25990}], "download_size": 613065, "dataset_size": 2950354}}
2023-10-11T13:44:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "agile_dataset" More Information needed
[ "# Dataset Card for \"agile_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"agile_dataset\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"agile_dataset\"\n\nMore Information needed" ]
ee2d9ca9569976972a89fa25abf4068abab81e34
# Dataset Card for Evaluation run of AA051610/T1B ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/AA051610/T1B - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [AA051610/T1B](https://huggingface.co/AA051610/T1B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_AA051610__T1B", "harness_truthfulqa_mc_0", split="train") ``` ## Latest results These are the [latest results from run 2023-10-11T14:47:52.551958](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__T1B/blob/main/results_2023-10-11T14-47-52.551958.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5990888068369461, "acc_stderr": 0.03431005125414193, "acc_norm": 0.6027895245963486, "acc_norm_stderr": 0.03429409520845181, "mc1": 0.3268053855569155, "mc1_stderr": 0.01641987473113503, "mc2": 0.4701781470729953, "mc2_stderr": 0.014777434418052576 }, "harness|arc:challenge|25": { "acc": 0.5290102389078498, "acc_stderr": 0.014586776355294321, "acc_norm": 0.5614334470989761, "acc_norm_stderr": 0.014500682618212864 }, "harness|hellaswag|10": { "acc": 0.611929894443338, "acc_stderr": 0.004863147544177516, "acc_norm": 0.7978490340569607, "acc_norm_stderr": 0.004007834585541846 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.04688261722621505, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.562962962962963, "acc_stderr": 0.04284958639753401, "acc_norm": 0.562962962962963, "acc_norm_stderr": 0.04284958639753401 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6973684210526315, "acc_stderr": 0.03738520676119668, "acc_norm": 0.6973684210526315, "acc_norm_stderr": 0.03738520676119668 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.68, "acc_stderr": 0.046882617226215034, "acc_norm": 0.68, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6528301886792452, "acc_stderr": 0.029300101705549652, "acc_norm": 0.6528301886792452, "acc_norm_stderr": 0.029300101705549652 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7083333333333334, "acc_stderr": 0.038009680605548594, "acc_norm": 0.7083333333333334, "acc_norm_stderr": 0.038009680605548594 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.53, 
"acc_stderr": 0.050161355804659205, "acc_norm": 0.53, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6184971098265896, "acc_stderr": 0.03703851193099522, "acc_norm": 0.6184971098265896, "acc_norm_stderr": 0.03703851193099522 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.048971049527263666, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.048971049527263666 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.43829787234042555, "acc_stderr": 0.03243618636108101, "acc_norm": 0.43829787234042555, "acc_norm_stderr": 0.03243618636108101 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4473684210526316, "acc_stderr": 0.04677473004491199, "acc_norm": 0.4473684210526316, "acc_norm_stderr": 0.04677473004491199 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5586206896551724, "acc_stderr": 0.04137931034482757, "acc_norm": 0.5586206896551724, "acc_norm_stderr": 0.04137931034482757 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4947089947089947, "acc_stderr": 0.02574986828855657, "acc_norm": 0.4947089947089947, "acc_norm_stderr": 0.02574986828855657 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3968253968253968, "acc_stderr": 0.0437588849272706, "acc_norm": 0.3968253968253968, "acc_norm_stderr": 0.0437588849272706 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7032258064516129, "acc_stderr": 0.025988500792411905, "acc_norm": 0.7032258064516129, "acc_norm_stderr": 0.025988500792411905 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.541871921182266, "acc_stderr": 0.03505630140785741, "acc_norm": 0.541871921182266, "acc_norm_stderr": 0.03505630140785741 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.64, "acc_stderr": 0.04824181513244218, "acc_norm": 0.64, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.4909090909090909, "acc_stderr": 0.039036986477484395, "acc_norm": 0.4909090909090909, "acc_norm_stderr": 0.039036986477484395 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.797979797979798, "acc_stderr": 0.02860620428922987, "acc_norm": 0.797979797979798, "acc_norm_stderr": 0.02860620428922987 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7927461139896373, "acc_stderr": 0.02925282329180363, "acc_norm": 0.7927461139896373, "acc_norm_stderr": 0.02925282329180363 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6076923076923076, "acc_stderr": 0.02475600038213095, "acc_norm": 0.6076923076923076, "acc_norm_stderr": 0.02475600038213095 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.362962962962963, "acc_stderr": 0.02931820364520686, "acc_norm": 0.362962962962963, "acc_norm_stderr": 0.02931820364520686 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6680672268907563, "acc_stderr": 0.03058869701378364, "acc_norm": 0.6680672268907563, "acc_norm_stderr": 0.03058869701378364 }, 
"harness|hendrycksTest-high_school_physics|5": { "acc": 0.423841059602649, "acc_stderr": 0.040348466786033974, "acc_norm": 0.423841059602649, "acc_norm_stderr": 0.040348466786033974 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7834862385321101, "acc_stderr": 0.01765871059444313, "acc_norm": 0.7834862385321101, "acc_norm_stderr": 0.01765871059444313 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5046296296296297, "acc_stderr": 0.03409825519163572, "acc_norm": 0.5046296296296297, "acc_norm_stderr": 0.03409825519163572 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.6176470588235294, "acc_stderr": 0.03410785338904719, "acc_norm": 0.6176470588235294, "acc_norm_stderr": 0.03410785338904719 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7637130801687764, "acc_stderr": 0.02765215314415927, "acc_norm": 0.7637130801687764, "acc_norm_stderr": 0.02765215314415927 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5874439461883408, "acc_stderr": 0.03304062175449297, "acc_norm": 0.5874439461883408, "acc_norm_stderr": 0.03304062175449297 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6946564885496184, "acc_stderr": 0.0403931497872456, "acc_norm": 0.6946564885496184, "acc_norm_stderr": 0.0403931497872456 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6388888888888888, "acc_stderr": 0.04643454608906275, "acc_norm": 0.6388888888888888, "acc_norm_stderr": 0.04643454608906275 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6748466257668712, "acc_stderr": 0.03680350371286461, "acc_norm": 0.6748466257668712, "acc_norm_stderr": 0.03680350371286461 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4017857142857143, "acc_stderr": 0.04653333146973646, "acc_norm": 0.4017857142857143, "acc_norm_stderr": 0.04653333146973646 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8076923076923077, "acc_stderr": 0.02581923325648372, "acc_norm": 0.8076923076923077, "acc_norm_stderr": 0.02581923325648372 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7496807151979565, "acc_stderr": 0.015491088951494583, "acc_norm": 0.7496807151979565, "acc_norm_stderr": 0.015491088951494583 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6271676300578035, "acc_stderr": 0.026033890613576277, "acc_norm": 0.6271676300578035, "acc_norm_stderr": 0.026033890613576277 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.38212290502793295, "acc_stderr": 0.016251139711570765, "acc_norm": 0.38212290502793295, "acc_norm_stderr": 0.016251139711570765 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6928104575163399, "acc_stderr": 0.026415601914388992, "acc_norm": 0.6928104575163399, "acc_norm_stderr": 0.026415601914388992 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6527331189710611, "acc_stderr": 0.027040745502307336, "acc_norm": 0.6527331189710611, "acc_norm_stderr": 0.027040745502307336 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6388888888888888, "acc_stderr": 0.026725868809100793, "acc_norm": 
0.6388888888888888, "acc_norm_stderr": 0.026725868809100793 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.46099290780141844, "acc_stderr": 0.02973659252642444, "acc_norm": 0.46099290780141844, "acc_norm_stderr": 0.02973659252642444 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.423728813559322, "acc_stderr": 0.012620785155885996, "acc_norm": 0.423728813559322, "acc_norm_stderr": 0.012620785155885996 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6286764705882353, "acc_stderr": 0.02934980313976587, "acc_norm": 0.6286764705882353, "acc_norm_stderr": 0.02934980313976587 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6062091503267973, "acc_stderr": 0.019766211991073056, "acc_norm": 0.6062091503267973, "acc_norm_stderr": 0.019766211991073056 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6090909090909091, "acc_stderr": 0.04673752333670239, "acc_norm": 0.6090909090909091, "acc_norm_stderr": 0.04673752333670239 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6857142857142857, "acc_stderr": 0.02971932942241748, "acc_norm": 0.6857142857142857, "acc_norm_stderr": 0.02971932942241748 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7810945273631841, "acc_stderr": 0.029239174636647, "acc_norm": 0.7810945273631841, "acc_norm_stderr": 0.029239174636647 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.82, "acc_stderr": 0.038612291966536934, "acc_norm": 0.82, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-virology|5": { "acc": 0.4578313253012048, "acc_stderr": 0.0387862677100236, "acc_norm": 0.4578313253012048, "acc_norm_stderr": 0.0387862677100236 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.783625730994152, "acc_stderr": 0.03158149539338734, "acc_norm": 0.783625730994152, "acc_norm_stderr": 0.03158149539338734 }, "harness|truthfulqa:mc|0": { "mc1": 0.3268053855569155, "mc1_stderr": 0.01641987473113503, "mc2": 0.4701781470729953, "mc2_stderr": 0.014777434418052576 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_AA051610__T1B
[ "region:us" ]
2023-10-11T13:48:15+00:00
{"pretty_name": "Evaluation run of AA051610/T1B", "dataset_summary": "Dataset automatically created during the evaluation run of model [AA051610/T1B](https://huggingface.co/AA051610/T1B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051610__T1B\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T14:47:52.551958](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__T1B/blob/main/results_2023-10-11T14-47-52.551958.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5990888068369461,\n \"acc_stderr\": 0.03431005125414193,\n \"acc_norm\": 0.6027895245963486,\n \"acc_norm_stderr\": 0.03429409520845181,\n \"mc1\": 0.3268053855569155,\n \"mc1_stderr\": 0.01641987473113503,\n \"mc2\": 0.4701781470729953,\n \"mc2_stderr\": 0.014777434418052576\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5290102389078498,\n \"acc_stderr\": 0.014586776355294321,\n \"acc_norm\": 0.5614334470989761,\n \"acc_norm_stderr\": 0.014500682618212864\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.611929894443338,\n \"acc_stderr\": 0.004863147544177516,\n \"acc_norm\": 0.7978490340569607,\n \"acc_norm_stderr\": 0.004007834585541846\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.03738520676119668,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.03738520676119668\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6528301886792452,\n \"acc_stderr\": 0.029300101705549652,\n \"acc_norm\": 0.6528301886792452,\n \"acc_norm_stderr\": 0.029300101705549652\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n \"acc_stderr\": 0.038009680605548594,\n \"acc_norm\": 0.7083333333333334,\n \"acc_norm_stderr\": 0.038009680605548594\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n 
\"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099522,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099522\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.43829787234042555,\n \"acc_stderr\": 0.03243618636108101,\n \"acc_norm\": 0.43829787234042555,\n \"acc_norm_stderr\": 0.03243618636108101\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.0437588849272706,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.0437588849272706\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7032258064516129,\n \"acc_stderr\": 0.025988500792411905,\n \"acc_norm\": 0.7032258064516129,\n \"acc_norm_stderr\": 0.025988500792411905\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.541871921182266,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.4909090909090909,\n \"acc_stderr\": 0.039036986477484395,\n \"acc_norm\": 0.4909090909090909,\n \"acc_norm_stderr\": 0.039036986477484395\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7927461139896373,\n \"acc_stderr\": 0.02925282329180363,\n \"acc_norm\": 0.7927461139896373,\n \"acc_norm_stderr\": 0.02925282329180363\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6076923076923076,\n \"acc_stderr\": 0.02475600038213095,\n \"acc_norm\": 
0.6076923076923076,\n \"acc_norm_stderr\": 0.02475600038213095\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.02931820364520686,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.02931820364520686\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.423841059602649,\n \"acc_stderr\": 0.040348466786033974,\n \"acc_norm\": 0.423841059602649,\n \"acc_norm_stderr\": 0.040348466786033974\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7834862385321101,\n \"acc_stderr\": 0.01765871059444313,\n \"acc_norm\": 0.7834862385321101,\n \"acc_norm_stderr\": 0.01765871059444313\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.03410785338904719,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.03410785338904719\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.02765215314415927,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.02765215314415927\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5874439461883408,\n \"acc_stderr\": 0.03304062175449297,\n \"acc_norm\": 0.5874439461883408,\n \"acc_norm_stderr\": 0.03304062175449297\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.0403931497872456,\n \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.0403931497872456\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.04643454608906275,\n \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.04643454608906275\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6748466257668712,\n \"acc_stderr\": 0.03680350371286461,\n \"acc_norm\": 0.6748466257668712,\n \"acc_norm_stderr\": 0.03680350371286461\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8076923076923077,\n \"acc_stderr\": 0.02581923325648372,\n \"acc_norm\": 0.8076923076923077,\n \"acc_norm_stderr\": 0.02581923325648372\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7496807151979565,\n \"acc_stderr\": 0.015491088951494583,\n \"acc_norm\": 0.7496807151979565,\n \"acc_norm_stderr\": 0.015491088951494583\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6271676300578035,\n \"acc_stderr\": 0.026033890613576277,\n \"acc_norm\": 0.6271676300578035,\n \"acc_norm_stderr\": 0.026033890613576277\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38212290502793295,\n \"acc_stderr\": 0.016251139711570765,\n \"acc_norm\": 0.38212290502793295,\n \"acc_norm_stderr\": 0.016251139711570765\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6928104575163399,\n \"acc_stderr\": 0.026415601914388992,\n \"acc_norm\": 0.6928104575163399,\n \"acc_norm_stderr\": 0.026415601914388992\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.026725868809100793,\n \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.026725868809100793\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.423728813559322,\n \"acc_stderr\": 0.012620785155885996,\n \"acc_norm\": 0.423728813559322,\n \"acc_norm_stderr\": 0.012620785155885996\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6286764705882353,\n \"acc_stderr\": 0.02934980313976587,\n \"acc_norm\": 0.6286764705882353,\n \"acc_norm_stderr\": 0.02934980313976587\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6062091503267973,\n \"acc_stderr\": 0.019766211991073056,\n \"acc_norm\": 0.6062091503267973,\n \"acc_norm_stderr\": 0.019766211991073056\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n \"acc_stderr\": 0.04673752333670239,\n \"acc_norm\": 0.6090909090909091,\n \"acc_norm_stderr\": 0.04673752333670239\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6857142857142857,\n \"acc_stderr\": 0.02971932942241748,\n \"acc_norm\": 0.6857142857142857,\n \"acc_norm_stderr\": 0.02971932942241748\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4578313253012048,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.4578313253012048,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338734,\n \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338734\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3268053855569155,\n \"mc1_stderr\": 0.01641987473113503,\n \"mc2\": 0.4701781470729953,\n \"mc2_stderr\": 0.014777434418052576\n }\n}\n```", "repo_url": "https://huggingface.co/AA051610/T1B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": 
["**/details_harness|arc:challenge|25_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hellaswag|10_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T14-47-52.551958.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T14-47-52.551958.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T14-47-52.551958.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T14-47-52.551958.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T14-47-52.551958.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T14_47_52.551958", "path": ["results_2023-10-11T14-47-52.551958.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T14-47-52.551958.parquet"]}]}]}
2023-10-11T13:49:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of AA051610/T1B ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model AA051610/T1B on the Open LLM Leaderboard. The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-11T14:47:52.551958 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
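The plain-text rendering above drops the fenced snippet that follows "do the following:"; restored from the `dataset_summary` field of this record's metadata, the loading example is:

```python
from datasets import load_dataset

# Details for the TruthfulQA task; the "train" split always points at the latest run.
data = load_dataset("open-llm-leaderboard/details_AA051610__T1B",
                    "harness_truthfulqa_mc_0",
                    split="train")
```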
[ "# Dataset Card for Evaluation run of AA051610/T1B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T14:47:52.551958(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of AA051610/T1B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T14:47:52.551958(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 17, 31, 165, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of AA051610/T1B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T14:47:52.551958(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
d35dc5fd7a91948b97d2aecee9bccfde79b8bc41
# Dataset Card for "6bf53b4b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/6bf53b4b
[ "region:us" ]
2023-10-11T13:51:08+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 162, "num_examples": 10}], "download_size": 1350, "dataset_size": 162}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T13:51:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "6bf53b4b" More Information needed
[ "# Dataset Card for \"6bf53b4b\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"6bf53b4b\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"6bf53b4b\"\n\nMore Information needed" ]
878e847cb137ef96edd7c08cf78a9dc1857b08e1
# Dataset Card for "musdb" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
danjacobellis/musdb
[ "region:us" ]
2023-10-11T14:03:04+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "mixture", "dtype": {"audio": {"sampling_rate": 44100, "mono": false}}}, {"name": "drums", "dtype": {"audio": {"sampling_rate": 44100, "mono": false}}}, {"name": "bass", "dtype": {"audio": {"sampling_rate": 44100, "mono": false}}}, {"name": "other", "dtype": {"audio": {"sampling_rate": 44100, "mono": false}}}, {"name": "vocals", "dtype": {"audio": {"sampling_rate": 44100, "mono": false}}}], "splits": [{"name": "test", "num_bytes": 6534850857.0, "num_examples": 15}], "download_size": 4928706537, "dataset_size": 6534850857.0}}
2023-10-11T15:01:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "musdb" More Information needed
[ "# Dataset Card for \"musdb\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"musdb\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"musdb\"\n\nMore Information needed" ]
bd8e8bbf325b79243f19358a94cf6524cd406df0
# Dataset Card for "MarketingMail" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ramchiluveru/MarketingMail
[ "region:us" ]
2023-10-11T14:03:10+00:00
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19321, "num_examples": 10}], "download_size": 25230, "dataset_size": 19321}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T14:03:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "MarketingMail" More Information needed
[ "# Dataset Card for \"MarketingMail\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"MarketingMail\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"MarketingMail\"\n\nMore Information needed" ]
d1cca3a48e5d76d0507a89e30c9a606e0f038709
# Dataset Card for "RamCMarketingMail" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ramchiluveru/RamCMarketingMail
[ "region:us" ]
2023-10-11T14:03:23+00:00
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19321, "num_examples": 10}], "download_size": 25230, "dataset_size": 19321}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T14:03:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "RamCMarketingMail" More Information needed
[ "# Dataset Card for \"RamCMarketingMail\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"RamCMarketingMail\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"RamCMarketingMail\"\n\nMore Information needed" ]
451e22aedb88b3a2d44cda002d19732e0102e303
# Dataset Card for "spotlight-osunlp-MagicBrush-enrichment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
renumics/spotlight-osunlp-MagicBrush-enrichment
[ "region:us" ]
2023-10-11T14:04:49+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "img_id.embedding", "sequence": "float32", "length": 2}, {"name": "source_img.embedding", "sequence": "float32", "length": 2}, {"name": "mask_img.embedding", "sequence": "float32", "length": 2}, {"name": "instruction.embedding", "sequence": "float32", "length": 2}, {"name": "target_img.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 352280, "num_examples": 8807}, {"name": "dev", "num_bytes": 21120, "num_examples": 528}], "download_size": 524053, "dataset_size": 373400}}
2023-10-11T14:04:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spotlight-osunlp-MagicBrush-enrichment" More Information needed
[ "# Dataset Card for \"spotlight-osunlp-MagicBrush-enrichment\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spotlight-osunlp-MagicBrush-enrichment\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-osunlp-MagicBrush-enrichment\"\n\nMore Information needed" ]
dffe5bb5b2eca8d20e615e198ec440cf27cb0980
# Dataset Card for Evaluation run of AA051610/T2A ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/AA051610/T2A - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [AA051610/T2A](https://huggingface.co/AA051610/T2A) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_AA051610__T2A", "harness_truthfulqa_mc_0", split="train") ``` ## Latest results These are the [latest results from run 2023-10-11T15:16:23.487044](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__T2A/blob/main/results_2023-10-11T15-16-23.487044.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6173946376864352, "acc_stderr": 0.03337871209335492, "acc_norm": 0.6210628259045844, "acc_norm_stderr": 0.03336880945880684, "mc1": 0.31334149326805383, "mc1_stderr": 0.0162380650690596, "mc2": 0.47014420938426915, "mc2_stderr": 0.014571966148559557 }, "harness|arc:challenge|25": { "acc": 0.4854948805460751, "acc_stderr": 0.014605241081370053, "acc_norm": 0.514505119453925, "acc_norm_stderr": 0.014605241081370056 }, "harness|hellaswag|10": { "acc": 0.5524795857398924, "acc_stderr": 0.0049622205125483525, "acc_norm": 0.739892451702848, "acc_norm_stderr": 0.004377965074211627 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.22, "acc_stderr": 0.041633319989322695, "acc_norm": 0.22, "acc_norm_stderr": 0.041633319989322695 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5555555555555556, "acc_stderr": 0.04292596718256981, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.04292596718256981 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6513157894736842, "acc_stderr": 0.038781398887976104, "acc_norm": 0.6513157894736842, "acc_norm_stderr": 0.038781398887976104 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118634, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118634 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6944444444444444, "acc_stderr": 0.03852084696008534, "acc_norm": 0.6944444444444444, "acc_norm_stderr": 0.03852084696008534 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51,
"acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.048108401480826346, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.048108401480826346 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5787234042553191, "acc_stderr": 0.03227834510146268, "acc_norm": 0.5787234042553191, "acc_norm_stderr": 0.03227834510146268 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.39473684210526316, "acc_stderr": 0.045981880578165414, "acc_norm": 0.39473684210526316, "acc_norm_stderr": 0.045981880578165414 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5241379310344828, "acc_stderr": 0.0416180850350153, "acc_norm": 0.5241379310344828, "acc_norm_stderr": 0.0416180850350153 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4444444444444444, "acc_stderr": 0.025591857761382175, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.025591857761382175 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.48412698412698413, "acc_stderr": 0.04469881854072606, "acc_norm": 0.48412698412698413, "acc_norm_stderr": 0.04469881854072606 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7516129032258064, "acc_stderr": 0.024580028921481003, "acc_norm": 0.7516129032258064, "acc_norm_stderr": 0.024580028921481003 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.541871921182266, "acc_stderr": 0.03505630140785741, "acc_norm": 0.541871921182266, "acc_norm_stderr": 0.03505630140785741 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.6, "acc_stderr": 0.049236596391733084, "acc_norm": 0.6, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7272727272727273, "acc_stderr": 0.0347769116216366, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.0347769116216366 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.02886977846026704, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.02886977846026704 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8911917098445595, "acc_stderr": 0.02247325333276877, "acc_norm": 0.8911917098445595, "acc_norm_stderr": 0.02247325333276877 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6051282051282051, "acc_stderr": 0.02478431694215639, "acc_norm": 0.6051282051282051, "acc_norm_stderr": 0.02478431694215639 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.32222222222222224, "acc_stderr": 0.028493465091028597, "acc_norm": 0.32222222222222224, "acc_norm_stderr": 0.028493465091028597 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6428571428571429, "acc_stderr": 0.031124619309328177, "acc_norm": 0.6428571428571429, "acc_norm_stderr": 0.031124619309328177 }, 
"harness|hendrycksTest-high_school_physics|5": { "acc": 0.37748344370860926, "acc_stderr": 0.0395802723112157, "acc_norm": 0.37748344370860926, "acc_norm_stderr": 0.0395802723112157 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8275229357798165, "acc_stderr": 0.016197807956848036, "acc_norm": 0.8275229357798165, "acc_norm_stderr": 0.016197807956848036 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5046296296296297, "acc_stderr": 0.03409825519163572, "acc_norm": 0.5046296296296297, "acc_norm_stderr": 0.03409825519163572 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.0286265479124374, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.0286265479124374 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7848101265822784, "acc_stderr": 0.026750826994676177, "acc_norm": 0.7848101265822784, "acc_norm_stderr": 0.026750826994676177 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6905829596412556, "acc_stderr": 0.031024411740572203, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.031024411740572203 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6717557251908397, "acc_stderr": 0.04118438565806298, "acc_norm": 0.6717557251908397, "acc_norm_stderr": 0.04118438565806298 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7407407407407407, "acc_stderr": 0.042365112580946315, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.042365112580946315 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7361963190184049, "acc_stderr": 0.03462419931615623, "acc_norm": 0.7361963190184049, "acc_norm_stderr": 0.03462419931615623 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.04726835553719099, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.04726835553719099 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.042450224863844956, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.042450224863844956 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.023086635086841403, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.023086635086841403 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8058748403575989, "acc_stderr": 0.01414397027665757, "acc_norm": 0.8058748403575989, "acc_norm_stderr": 0.01414397027665757 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6676300578034682, "acc_stderr": 0.025361168749688218, "acc_norm": 0.6676300578034682, "acc_norm_stderr": 0.025361168749688218 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.30837988826815643, "acc_stderr": 0.015445716910998874, "acc_norm": 0.30837988826815643, "acc_norm_stderr": 0.015445716910998874 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7254901960784313, "acc_stderr": 0.025553169991826514, "acc_norm": 0.7254901960784313, "acc_norm_stderr": 0.025553169991826514 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7170418006430869, "acc_stderr": 0.025583062489984824, "acc_norm": 0.7170418006430869, "acc_norm_stderr": 0.025583062489984824 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6820987654320988, "acc_stderr": 0.02591006352824088, 
"acc_norm": 0.6820987654320988, "acc_norm_stderr": 0.02591006352824088 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4574468085106383, "acc_stderr": 0.029719281272236837, "acc_norm": 0.4574468085106383, "acc_norm_stderr": 0.029719281272236837 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.49282920469361147, "acc_stderr": 0.012768922739553313, "acc_norm": 0.49282920469361147, "acc_norm_stderr": 0.012768922739553313 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6139705882352942, "acc_stderr": 0.029573269134411124, "acc_norm": 0.6139705882352942, "acc_norm_stderr": 0.029573269134411124 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6209150326797386, "acc_stderr": 0.019627444748412243, "acc_norm": 0.6209150326797386, "acc_norm_stderr": 0.019627444748412243 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6454545454545455, "acc_stderr": 0.04582004841505416, "acc_norm": 0.6454545454545455, "acc_norm_stderr": 0.04582004841505416 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6979591836734694, "acc_stderr": 0.0293936093198798, "acc_norm": 0.6979591836734694, "acc_norm_stderr": 0.0293936093198798 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8159203980099502, "acc_stderr": 0.027403859410786848, "acc_norm": 0.8159203980099502, "acc_norm_stderr": 0.027403859410786848 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.03487350880197768, "acc_norm": 0.86, "acc_norm_stderr": 0.03487350880197768 }, "harness|hendrycksTest-virology|5": { "acc": 0.4879518072289157, "acc_stderr": 0.03891364495835821, "acc_norm": 0.4879518072289157, "acc_norm_stderr": 0.03891364495835821 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7894736842105263, "acc_stderr": 0.0312678171466318, "acc_norm": 0.7894736842105263, "acc_norm_stderr": 0.0312678171466318 }, "harness|truthfulqa:mc|0": { "mc1": 0.31334149326805383, "mc1_stderr": 0.0162380650690596, "mc2": 0.47014420938426915, "mc2_stderr": 0.014571966148559557 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_AA051610__T2A
[ "region:us" ]
2023-10-11T14:16:46+00:00
{"pretty_name": "Evaluation run of AA051610/T2A", "dataset_summary": "Dataset automatically created during the evaluation run of model [AA051610/T2A](https://huggingface.co/AA051610/T2A) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051610__T2A\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T15:16:23.487044](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__T2A/blob/main/results_2023-10-11T15-16-23.487044.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6173946376864352,\n \"acc_stderr\": 0.03337871209335492,\n \"acc_norm\": 0.6210628259045844,\n \"acc_norm_stderr\": 0.03336880945880684,\n \"mc1\": 0.31334149326805383,\n \"mc1_stderr\": 0.0162380650690596,\n \"mc2\": 0.47014420938426915,\n \"mc2_stderr\": 0.014571966148559557\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4854948805460751,\n \"acc_stderr\": 0.014605241081370053,\n \"acc_norm\": 0.514505119453925,\n \"acc_norm_stderr\": 0.014605241081370056\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5524795857398924,\n \"acc_stderr\": 0.0049622205125483525,\n \"acc_norm\": 0.739892451702848,\n \"acc_norm_stderr\": 0.004377965074211627\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.038781398887976104,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.038781398887976104\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118634,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118634\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.03852084696008534,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.03852084696008534\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n 
\"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.39473684210526316,\n \"acc_stderr\": 0.045981880578165414,\n \"acc_norm\": 0.39473684210526316,\n \"acc_norm_stderr\": 0.045981880578165414\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.025591857761382175,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.025591857761382175\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7516129032258064,\n \"acc_stderr\": 0.024580028921481003,\n \"acc_norm\": 0.7516129032258064,\n \"acc_norm_stderr\": 0.024580028921481003\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.541871921182266,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026704,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026704\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.02247325333276877,\n \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.02247325333276877\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6051282051282051,\n \"acc_stderr\": 0.02478431694215639,\n \"acc_norm\": 
0.6051282051282051,\n \"acc_norm_stderr\": 0.02478431694215639\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028597,\n \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028597\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6428571428571429,\n \"acc_stderr\": 0.031124619309328177,\n \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.031124619309328177\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.0395802723112157,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.0395802723112157\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8275229357798165,\n \"acc_stderr\": 0.016197807956848036,\n \"acc_norm\": 0.8275229357798165,\n \"acc_norm_stderr\": 0.016197807956848036\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.0286265479124374,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.0286265479124374\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676177,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676177\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.031024411740572203,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.031024411740572203\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6717557251908397,\n \"acc_stderr\": 0.04118438565806298,\n \"acc_norm\": 0.6717557251908397,\n \"acc_norm_stderr\": 0.04118438565806298\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.042365112580946315,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.042365112580946315\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615623,\n \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615623\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.042450224863844956,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.042450224863844956\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841403,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841403\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8058748403575989,\n \"acc_stderr\": 0.01414397027665757,\n \"acc_norm\": 0.8058748403575989,\n \"acc_norm_stderr\": 0.01414397027665757\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6676300578034682,\n \"acc_stderr\": 0.025361168749688218,\n \"acc_norm\": 0.6676300578034682,\n \"acc_norm_stderr\": 0.025361168749688218\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30837988826815643,\n \"acc_stderr\": 0.015445716910998874,\n \"acc_norm\": 0.30837988826815643,\n \"acc_norm_stderr\": 0.015445716910998874\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.025553169991826514,\n \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.025553169991826514\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.025583062489984824,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.025583062489984824\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6820987654320988,\n \"acc_stderr\": 0.02591006352824088,\n \"acc_norm\": 0.6820987654320988,\n \"acc_norm_stderr\": 0.02591006352824088\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236837,\n \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236837\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.49282920469361147,\n \"acc_stderr\": 0.012768922739553313,\n \"acc_norm\": 0.49282920469361147,\n \"acc_norm_stderr\": 0.012768922739553313\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6139705882352942,\n \"acc_stderr\": 0.029573269134411124,\n \"acc_norm\": 0.6139705882352942,\n \"acc_norm_stderr\": 0.029573269134411124\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6209150326797386,\n \"acc_stderr\": 0.019627444748412243,\n \"acc_norm\": 0.6209150326797386,\n \"acc_norm_stderr\": 0.019627444748412243\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.04582004841505416,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.04582004841505416\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.0293936093198798,\n \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.0293936093198798\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n \"acc_stderr\": 0.027403859410786848,\n \"acc_norm\": 0.8159203980099502,\n \"acc_norm_stderr\": 0.027403859410786848\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197768,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197768\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n \"acc_stderr\": 0.03891364495835821,\n \"acc_norm\": 0.4879518072289157,\n \"acc_norm_stderr\": 0.03891364495835821\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.0312678171466318,\n \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.0312678171466318\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31334149326805383,\n \"mc1_stderr\": 0.0162380650690596,\n \"mc2\": 0.47014420938426915,\n \"mc2_stderr\": 0.014571966148559557\n }\n}\n```", "repo_url": "https://huggingface.co/AA051610/T2A", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": 
["**/details_harness|arc:challenge|25_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hellaswag|10_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-16-23.487044.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-16-23.487044.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-16-23.487044.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T15-16-23.487044.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T15-16-23.487044.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T15_16_23.487044", "path": ["results_2023-10-11T15-16-23.487044.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T15-16-23.487044.parquet"]}]}]}
2023-10-11T14:18:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of AA051610/T2A ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model AA051610/T2A on the Open LLM Leaderboard. The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following (see the sketch just after this card): ## Latest results These are the latest results from run 2023-10-11T15:16:23.487044 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
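The cleaned card above ends "do the following:" with the original loading snippet stripped. A minimal sketch of the intended call, assuming the usual leaderboard repo naming `open-llm-leaderboard/details_<org>__<model>` (so `details_AA051610__T2A` here, which is an assumption); the `harness_truthfulqa_mc_0` config name is confirmed by this record's metadata above:

```python
from datasets import load_dataset

# Repo id assumed from the leaderboard's details_<org>__<model> convention;
# the config name appears verbatim in this record's "configs" metadata.
data = load_dataset("open-llm-leaderboard/details_AA051610__T2A",
	"harness_truthfulqa_mc_0",
	split="train")  # "train" always points at the latest results
```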
[ "# Dataset Card for Evaluation run of AA051610/T2A", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T2A on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T15:16:23.487044(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of AA051610/T2A", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T2A on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T15:16:23.487044(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 17, 31, 165, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of AA051610/T2A## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T2A on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T15:16:23.487044(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
de6ac87f8cba8c50055771681759f4a6d70585ec
# Dataset Card for Evaluation run of AA051610/T1C

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/AA051610/T1C
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [AA051610/T1C](https://huggingface.co/AA051610/T1C) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run (see the split-loading sketch after this card). The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AA051610__T1C",
	"harness_truthfulqa_mc_0",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-11T15:21:34.954726](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__T1C/blob/main/results_2023-10-11T15-21-34.954726.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "acc": 0.5614045523007456,
        "acc_stderr": 0.034472805150990236,
        "acc_norm": 0.5650409022375938,
        "acc_norm_stderr": 0.03446466967324352,
        "mc1": 0.2913096695226438,
        "mc1_stderr": 0.015905987048184828,
        "mc2": 0.42517178573631115,
        "mc2_stderr": 0.01461529390566251
    },
    "harness|arc:challenge|25": {
        "acc": 0.4709897610921502,
        "acc_stderr": 0.014586776355294316,
        "acc_norm": 0.5017064846416383,
        "acc_norm_stderr": 0.01461130570505699
    },
    "harness|hellaswag|10": {
        "acc": 0.5382393945429197,
        "acc_stderr": 0.004975167382061832,
        "acc_norm": 0.7220673172674766,
        "acc_norm_stderr": 0.004470644845242893
    },
    "harness|hendrycksTest-abstract_algebra|5": {
        "acc": 0.32,
        "acc_stderr": 0.04688261722621504,
        "acc_norm": 0.32,
        "acc_norm_stderr": 0.04688261722621504
    },
    "harness|hendrycksTest-anatomy|5": {
        "acc": 0.5037037037037037,
        "acc_stderr": 0.043192236258113324,
        "acc_norm": 0.5037037037037037,
        "acc_norm_stderr": 0.043192236258113324
    },
    "harness|hendrycksTest-astronomy|5": {
        "acc": 0.5789473684210527,
        "acc_stderr": 0.04017901275981748,
        "acc_norm": 0.5789473684210527,
        "acc_norm_stderr": 0.04017901275981748
    },
    "harness|hendrycksTest-business_ethics|5": {
        "acc": 0.65,
        "acc_stderr": 0.04793724854411019,
        "acc_norm": 0.65,
        "acc_norm_stderr": 0.04793724854411019
    },
    "harness|hendrycksTest-clinical_knowledge|5": {
        "acc": 0.6415094339622641,
        "acc_stderr": 0.02951470358398177,
        "acc_norm": 0.6415094339622641,
        "acc_norm_stderr": 0.02951470358398177
    },
    "harness|hendrycksTest-college_biology|5": {
        "acc": 0.6111111111111112,
        "acc_stderr": 0.04076663253918567,
        "acc_norm": 0.6111111111111112,
        "acc_norm_stderr": 0.04076663253918567
    },
    "harness|hendrycksTest-college_chemistry|5": {
        "acc": 0.45,
        "acc_stderr": 0.05,
        "acc_norm": 0.45,
        "acc_norm_stderr": 0.05
    },
    "harness|hendrycksTest-college_computer_science|5": {
        "acc": 0.5,
        "acc_stderr": 0.050251890762960605,
        "acc_norm": 0.5,
        "acc_norm_stderr": 0.050251890762960605
    },
    "harness|hendrycksTest-college_mathematics|5": {
        "acc": 0.38,
        "acc_stderr": 0.048783173121456316,
        "acc_norm": 0.38,
        "acc_norm_stderr": 0.048783173121456316
    },
    "harness|hendrycksTest-college_medicine|5": {
        "acc": 0.6184971098265896,
        "acc_stderr": 0.03703851193099521,
        "acc_norm": 0.6184971098265896,
        "acc_norm_stderr": 0.03703851193099521
    },
    "harness|hendrycksTest-college_physics|5": {
        "acc": 0.30392156862745096,
        "acc_stderr": 0.04576665403207762,
        "acc_norm": 0.30392156862745096,
        "acc_norm_stderr": 0.04576665403207762
    },
    "harness|hendrycksTest-computer_security|5": {
        "acc": 0.71,
        "acc_stderr": 0.045604802157206845,
        "acc_norm": 0.71,
        "acc_norm_stderr": 0.045604802157206845
    },
    "harness|hendrycksTest-conceptual_physics|5": {
        "acc": 0.502127659574468,
        "acc_stderr": 0.032685726586674915,
        "acc_norm": 0.502127659574468,
        "acc_norm_stderr": 0.032685726586674915
    },
    "harness|hendrycksTest-econometrics|5": {
        "acc": 0.4473684210526316,
        "acc_stderr": 0.04677473004491199,
        "acc_norm": 0.4473684210526316,
        "acc_norm_stderr": 0.04677473004491199
    },
    "harness|hendrycksTest-electrical_engineering|5": {
        "acc": 0.47586206896551725,
        "acc_stderr": 0.0416180850350153,
        "acc_norm": 0.47586206896551725,
        "acc_norm_stderr": 0.0416180850350153
    },
    "harness|hendrycksTest-elementary_mathematics|5": {
        "acc": 0.3412698412698413,
        "acc_stderr": 0.024419234966819067,
        "acc_norm": 0.3412698412698413,
        "acc_norm_stderr": 0.024419234966819067
    },
    "harness|hendrycksTest-formal_logic|5": {
        "acc": 0.35714285714285715,
        "acc_stderr": 0.04285714285714281,
        "acc_norm": 0.35714285714285715,
        "acc_norm_stderr": 0.04285714285714281
    },
    "harness|hendrycksTest-global_facts|5": {
        "acc": 0.38,
        "acc_stderr": 0.04878317312145632,
        "acc_norm": 0.38,
        "acc_norm_stderr": 0.04878317312145632
    },
    "harness|hendrycksTest-high_school_biology|5": {
        "acc": 0.6774193548387096,
        "acc_stderr": 0.026593084516572284,
        "acc_norm": 0.6774193548387096,
        "acc_norm_stderr": 0.026593084516572284
    },
    "harness|hendrycksTest-high_school_chemistry|5": {
        "acc": 0.4187192118226601,
        "acc_stderr": 0.03471192860518468,
        "acc_norm": 0.4187192118226601,
        "acc_norm_stderr": 0.03471192860518468
    },
    "harness|hendrycksTest-high_school_computer_science|5": {
        "acc": 0.6,
        "acc_stderr": 0.049236596391733084,
        "acc_norm": 0.6,
        "acc_norm_stderr": 0.049236596391733084
    },
    "harness|hendrycksTest-high_school_european_history|5": {
        "acc": 0.2606060606060606,
        "acc_stderr": 0.03427743175816524,
        "acc_norm": 0.2606060606060606,
        "acc_norm_stderr": 0.03427743175816524
    },
    "harness|hendrycksTest-high_school_geography|5": {
        "acc": 0.7373737373737373,
        "acc_stderr": 0.03135305009533084,
        "acc_norm": 0.7373737373737373,
        "acc_norm_stderr": 0.03135305009533084
    },
    "harness|hendrycksTest-high_school_government_and_politics|5": {
        "acc": 0.7979274611398963,
        "acc_stderr": 0.02897908979429673,
        "acc_norm": 0.7979274611398963,
        "acc_norm_stderr": 0.02897908979429673
    },
    "harness|hendrycksTest-high_school_macroeconomics|5": {
        "acc": 0.5128205128205128,
        "acc_stderr": 0.025342671293807257,
        "acc_norm": 0.5128205128205128,
        "acc_norm_stderr": 0.025342671293807257
    },
    "harness|hendrycksTest-high_school_mathematics|5": {
        "acc": 0.26666666666666666,
        "acc_stderr": 0.02696242432507382,
        "acc_norm": 0.26666666666666666,
        "acc_norm_stderr": 0.02696242432507382
    },
    "harness|hendrycksTest-high_school_microeconomics|5": {
        "acc": 0.5462184873949579,
        "acc_stderr": 0.03233943468182088,
        "acc_norm": 0.5462184873949579,
        "acc_norm_stderr": 0.03233943468182088
    },
    "harness|hendrycksTest-high_school_physics|5": {
        "acc": 0.3443708609271523,
        "acc_stderr": 0.038796870240733264,
        "acc_norm": 0.3443708609271523,
        "acc_norm_stderr": 0.038796870240733264
    },
    "harness|hendrycksTest-high_school_psychology|5": {
        "acc": 0.7541284403669725,
        "acc_stderr": 0.018461940968708443,
        "acc_norm": 0.7541284403669725,
        "acc_norm_stderr": 0.018461940968708443
    },
    "harness|hendrycksTest-high_school_statistics|5": {
        "acc": 0.41203703703703703,
        "acc_stderr": 0.03356787758160835,
        "acc_norm": 0.41203703703703703,
        "acc_norm_stderr": 0.03356787758160835
    },
    "harness|hendrycksTest-high_school_us_history|5": {
        "acc": 0.6029411764705882,
        "acc_stderr": 0.03434131164719129,
        "acc_norm": 0.6029411764705882,
        "acc_norm_stderr": 0.03434131164719129
    },
    "harness|hendrycksTest-high_school_world_history|5": {
        "acc": 0.7637130801687764,
        "acc_stderr": 0.02765215314415927,
        "acc_norm": 0.7637130801687764,
        "acc_norm_stderr": 0.02765215314415927
    },
    "harness|hendrycksTest-human_aging|5": {
        "acc": 0.6905829596412556,
        "acc_stderr": 0.03102441174057221,
        "acc_norm": 0.6905829596412556,
        "acc_norm_stderr": 0.03102441174057221
    },
    "harness|hendrycksTest-human_sexuality|5": {
        "acc": 0.648854961832061,
        "acc_stderr": 0.04186445163013751,
        "acc_norm": 0.648854961832061,
        "acc_norm_stderr": 0.04186445163013751
    },
    "harness|hendrycksTest-international_law|5": {
        "acc": 0.7520661157024794,
        "acc_stderr": 0.03941897526516302,
        "acc_norm": 0.7520661157024794,
        "acc_norm_stderr": 0.03941897526516302
    },
    "harness|hendrycksTest-jurisprudence|5": {
        "acc": 0.6944444444444444,
        "acc_stderr": 0.04453197507374984,
        "acc_norm": 0.6944444444444444,
        "acc_norm_stderr": 0.04453197507374984
    },
    "harness|hendrycksTest-logical_fallacies|5": {
        "acc": 0.6441717791411042,
        "acc_stderr": 0.03761521380046734,
        "acc_norm": 0.6441717791411042,
        "acc_norm_stderr": 0.03761521380046734
    },
    "harness|hendrycksTest-machine_learning|5": {
        "acc": 0.44642857142857145,
        "acc_stderr": 0.047184714852195886,
        "acc_norm": 0.44642857142857145,
        "acc_norm_stderr": 0.047184714852195886
    },
    "harness|hendrycksTest-management|5": {
        "acc": 0.6990291262135923,
        "acc_stderr": 0.045416094465039476,
        "acc_norm": 0.6990291262135923,
        "acc_norm_stderr": 0.045416094465039476
    },
    "harness|hendrycksTest-marketing|5": {
        "acc": 0.8290598290598291,
        "acc_stderr": 0.02466249684520982,
        "acc_norm": 0.8290598290598291,
        "acc_norm_stderr": 0.02466249684520982
    },
    "harness|hendrycksTest-medical_genetics|5": {
        "acc": 0.69,
        "acc_stderr": 0.04648231987117316,
        "acc_norm": 0.69,
        "acc_norm_stderr": 0.04648231987117316
    },
    "harness|hendrycksTest-miscellaneous|5": {
        "acc": 0.7624521072796935,
        "acc_stderr": 0.015218733046150193,
        "acc_norm": 0.7624521072796935,
        "acc_norm_stderr": 0.015218733046150193
    },
    "harness|hendrycksTest-moral_disputes|5": {
        "acc": 0.6069364161849711,
        "acc_stderr": 0.02629622791561367,
        "acc_norm": 0.6069364161849711,
        "acc_norm_stderr": 0.02629622791561367
    },
    "harness|hendrycksTest-moral_scenarios|5": {
        "acc": 0.3027932960893855,
        "acc_stderr": 0.01536686038639711,
        "acc_norm": 0.3027932960893855,
        "acc_norm_stderr": 0.01536686038639711
    },
    "harness|hendrycksTest-nutrition|5": {
        "acc": 0.6339869281045751,
        "acc_stderr": 0.02758281141515961,
        "acc_norm": 0.6339869281045751,
        "acc_norm_stderr": 0.02758281141515961
    },
    "harness|hendrycksTest-philosophy|5": {
        "acc": 0.6559485530546624,
        "acc_stderr": 0.02698147804364804,
        "acc_norm": 0.6559485530546624,
        "acc_norm_stderr": 0.02698147804364804
    },
    "harness|hendrycksTest-prehistory|5": {
        "acc": 0.6080246913580247,
        "acc_stderr": 0.027163686038271146,
        "acc_norm": 0.6080246913580247,
        "acc_norm_stderr": 0.027163686038271146
    },
    "harness|hendrycksTest-professional_accounting|5": {
        "acc": 0.41134751773049644,
        "acc_stderr": 0.02935491115994098,
        "acc_norm": 0.41134751773049644,
        "acc_norm_stderr": 0.02935491115994098
    },
    "harness|hendrycksTest-professional_law|5": {
        "acc": 0.4517601043024772,
        "acc_stderr": 0.012710662233660247,
        "acc_norm": 0.4517601043024772,
        "acc_norm_stderr": 0.012710662233660247
    },
    "harness|hendrycksTest-professional_medicine|5": {
        "acc": 0.5183823529411765,
        "acc_stderr": 0.030352303395351964,
        "acc_norm": 0.5183823529411765,
        "acc_norm_stderr": 0.030352303395351964
    },
    "harness|hendrycksTest-professional_psychology|5": {
        "acc": 0.5718954248366013,
        "acc_stderr": 0.0200176292142131,
        "acc_norm": 0.5718954248366013,
        "acc_norm_stderr": 0.0200176292142131
    },
    "harness|hendrycksTest-public_relations|5": {
        "acc": 0.5818181818181818,
        "acc_stderr": 0.04724577405731572,
        "acc_norm": 0.5818181818181818,
        "acc_norm_stderr": 0.04724577405731572
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.6326530612244898,
        "acc_stderr": 0.030862144921087548,
        "acc_norm": 0.6326530612244898,
        "acc_norm_stderr": 0.030862144921087548
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.7810945273631841,
        "acc_stderr": 0.029239174636647,
        "acc_norm": 0.7810945273631841,
        "acc_norm_stderr": 0.029239174636647
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.78,
        "acc_stderr": 0.04163331998932261,
        "acc_norm": 0.78,
        "acc_norm_stderr": 0.04163331998932261
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.46987951807228917,
        "acc_stderr": 0.03885425420866766,
        "acc_norm": 0.46987951807228917,
        "acc_norm_stderr": 0.03885425420866766
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.783625730994152,
        "acc_stderr": 0.03158149539338734,
        "acc_norm": 0.783625730994152,
        "acc_norm_stderr": 0.03158149539338734
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.2913096695226438,
        "mc1_stderr": 0.015905987048184828,
        "mc2": 0.42517178573631115,
        "mc2_stderr": 0.01461529390566251
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
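A short sketch of the split semantics described in the summary above (not part of the original card): the config name and both split names are taken verbatim from this record's `configs` metadata below; everything else is illustrative, assuming the Hub serves the timestamped split under the name exactly as declared.

```python
from datasets import load_dataset

# Config and split names come from this record's "configs" metadata.
REPO = "open-llm-leaderboard/details_AA051610__T1C"

# "latest" always tracks the most recent evaluation run for this subtask.
latest = load_dataset(REPO, "harness_hendrycksTest_astronomy_5", split="latest")

# A timestamped split pins one specific run (here, 2023-10-11T15:21:34.954726).
pinned = load_dataset(REPO, "harness_hendrycksTest_astronomy_5",
                      split="2023_10_11T15_21_34.954726")
```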
open-llm-leaderboard/details_AA051610__T1C
[ "region:us" ]
2023-10-11T14:21:56+00:00
{"pretty_name": "Evaluation run of AA051610/T1C", "dataset_summary": "Dataset automatically created during the evaluation run of model [AA051610/T1C](https://huggingface.co/AA051610/T1C) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051610__T1C\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T15:21:34.954726](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__T1C/blob/main/results_2023-10-11T15-21-34.954726.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5614045523007456,\n \"acc_stderr\": 0.034472805150990236,\n \"acc_norm\": 0.5650409022375938,\n \"acc_norm_stderr\": 0.03446466967324352,\n \"mc1\": 0.2913096695226438,\n \"mc1_stderr\": 0.015905987048184828,\n \"mc2\": 0.42517178573631115,\n \"mc2_stderr\": 0.01461529390566251\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4709897610921502,\n \"acc_stderr\": 0.014586776355294316,\n \"acc_norm\": 0.5017064846416383,\n \"acc_norm_stderr\": 0.01461130570505699\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5382393945429197,\n \"acc_stderr\": 0.004975167382061832,\n \"acc_norm\": 0.7220673172674766,\n \"acc_norm_stderr\": 0.004470644845242893\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5037037037037037,\n \"acc_stderr\": 0.043192236258113324,\n \"acc_norm\": 0.5037037037037037,\n \"acc_norm_stderr\": 0.043192236258113324\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5789473684210527,\n \"acc_stderr\": 0.04017901275981748,\n \"acc_norm\": 0.5789473684210527,\n \"acc_norm_stderr\": 0.04017901275981748\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6415094339622641,\n \"acc_stderr\": 0.02951470358398177,\n \"acc_norm\": 0.6415094339622641,\n \"acc_norm_stderr\": 0.02951470358398177\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.04076663253918567,\n \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.04076663253918567\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n 
\"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.04576665403207762,\n \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.04576665403207762\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.502127659574468,\n \"acc_stderr\": 0.032685726586674915,\n \"acc_norm\": 0.502127659574468,\n \"acc_norm_stderr\": 0.032685726586674915\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.47586206896551725,\n \"acc_stderr\": 0.0416180850350153,\n \"acc_norm\": 0.47586206896551725,\n \"acc_norm_stderr\": 0.0416180850350153\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3412698412698413,\n \"acc_stderr\": 0.024419234966819067,\n \"acc_norm\": 0.3412698412698413,\n \"acc_norm_stderr\": 0.024419234966819067\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6774193548387096,\n \"acc_stderr\": 0.026593084516572284,\n \"acc_norm\": 0.6774193548387096,\n \"acc_norm_stderr\": 0.026593084516572284\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4187192118226601,\n \"acc_stderr\": 0.03471192860518468,\n \"acc_norm\": 0.4187192118226601,\n \"acc_norm_stderr\": 0.03471192860518468\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2606060606060606,\n \"acc_stderr\": 0.03427743175816524,\n \"acc_norm\": 0.2606060606060606,\n \"acc_norm_stderr\": 0.03427743175816524\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533084,\n \"acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533084\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7979274611398963,\n \"acc_stderr\": 0.02897908979429673,\n \"acc_norm\": 0.7979274611398963,\n \"acc_norm_stderr\": 0.02897908979429673\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5128205128205128,\n \"acc_stderr\": 0.025342671293807257,\n \"acc_norm\": 
0.5128205128205128,\n \"acc_norm_stderr\": 0.025342671293807257\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.02696242432507382,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.02696242432507382\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5462184873949579,\n \"acc_stderr\": 0.03233943468182088,\n \"acc_norm\": 0.5462184873949579,\n \"acc_norm_stderr\": 0.03233943468182088\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7541284403669725,\n \"acc_stderr\": 0.018461940968708443,\n \"acc_norm\": 0.7541284403669725,\n \"acc_norm_stderr\": 0.018461940968708443\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.41203703703703703,\n \"acc_stderr\": 0.03356787758160835,\n \"acc_norm\": 0.41203703703703703,\n \"acc_norm_stderr\": 0.03356787758160835\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6029411764705882,\n \"acc_stderr\": 0.03434131164719129,\n \"acc_norm\": 0.6029411764705882,\n \"acc_norm_stderr\": 0.03434131164719129\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.02765215314415927,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.02765215314415927\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516302,\n \"acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516302\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.04453197507374984,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.04453197507374984\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6441717791411042,\n \"acc_stderr\": 0.03761521380046734,\n \"acc_norm\": 0.6441717791411042,\n \"acc_norm_stderr\": 0.03761521380046734\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.047184714852195886,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.047184714852195886\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6990291262135923,\n \"acc_stderr\": 0.045416094465039476,\n \"acc_norm\": 0.6990291262135923,\n \"acc_norm_stderr\": 0.045416094465039476\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8290598290598291,\n \"acc_stderr\": 0.02466249684520982,\n \"acc_norm\": 0.8290598290598291,\n \"acc_norm_stderr\": 0.02466249684520982\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7624521072796935,\n \"acc_stderr\": 0.015218733046150193,\n \"acc_norm\": 0.7624521072796935,\n \"acc_norm_stderr\": 0.015218733046150193\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.02629622791561367,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.02629622791561367\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3027932960893855,\n \"acc_stderr\": 0.01536686038639711,\n \"acc_norm\": 0.3027932960893855,\n \"acc_norm_stderr\": 0.01536686038639711\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6339869281045751,\n \"acc_stderr\": 0.02758281141515961,\n \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.02758281141515961\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n \"acc_stderr\": 0.02698147804364804,\n \"acc_norm\": 0.6559485530546624,\n \"acc_norm_stderr\": 0.02698147804364804\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6080246913580247,\n \"acc_stderr\": 0.027163686038271146,\n \"acc_norm\": 0.6080246913580247,\n \"acc_norm_stderr\": 0.027163686038271146\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.41134751773049644,\n \"acc_stderr\": 0.02935491115994098,\n \"acc_norm\": 0.41134751773049644,\n \"acc_norm_stderr\": 0.02935491115994098\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4517601043024772,\n \"acc_stderr\": 0.012710662233660247,\n \"acc_norm\": 0.4517601043024772,\n \"acc_norm_stderr\": 0.012710662233660247\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5183823529411765,\n \"acc_stderr\": 0.030352303395351964,\n \"acc_norm\": 0.5183823529411765,\n \"acc_norm_stderr\": 0.030352303395351964\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5718954248366013,\n \"acc_stderr\": 0.0200176292142131,\n \"acc_norm\": 0.5718954248366013,\n \"acc_norm_stderr\": 0.0200176292142131\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5818181818181818,\n \"acc_stderr\": 0.04724577405731572,\n \"acc_norm\": 0.5818181818181818,\n \"acc_norm_stderr\": 0.04724577405731572\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6326530612244898,\n \"acc_stderr\": 0.030862144921087548,\n \"acc_norm\": 0.6326530612244898,\n \"acc_norm_stderr\": 0.030862144921087548\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.46987951807228917,\n \"acc_stderr\": 0.03885425420866766,\n \"acc_norm\": 0.46987951807228917,\n \"acc_norm_stderr\": 0.03885425420866766\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338734,\n \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338734\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2913096695226438,\n \"mc1_stderr\": 0.015905987048184828,\n \"mc2\": 0.42517178573631115,\n \"mc2_stderr\": 0.01461529390566251\n }\n}\n```", "repo_url": "https://huggingface.co/AA051610/T1C", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": 
["**/details_harness|arc:challenge|25_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hellaswag|10_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-21-34.954726.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-21-34.954726.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-21-34.954726.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T15-21-34.954726.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T15-21-34.954726.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T15_21_34.954726", "path": ["results_2023-10-11T15-21-34.954726.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T15-21-34.954726.parquet"]}]}]}
2023-10-11T14:22:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of AA051610/T1C ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model AA051610/T1C on the Open LLM Leaderboard. The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-11T15:21:34.954726 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
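The flattened card above breaks off after "To load the details from a run, you can for instance do the following:", because the code block was dropped when the text was flattened. Below is a minimal sketch of that snippet in the style the other leaderboard cards in this dump use; note that the repo id `open-llm-leaderboard/details_AA051610__T1C` is an assumption inferred from the `details_<org>__<model>` naming convention, while `harness_truthfulqa_mc_0` and the `latest` split are taken from this record's config listing.

```python
from datasets import load_dataset

# Assumed repo id (details_<org>__<model> convention used by these cards);
# "harness_truthfulqa_mc_0" and the "latest" split both appear in this
# record's data_files metadata.
data = load_dataset(
    "open-llm-leaderboard/details_AA051610__T1C",
    "harness_truthfulqa_mc_0",
    split="latest",
)
```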
[ "# Dataset Card for Evaluation run of AA051610/T1C", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T1C on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T15:21:34.954726(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of AA051610/T1C", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T1C on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T15:21:34.954726(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 17, 31, 165, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of AA051610/T1C## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model AA051610/T1C on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T15:21:34.954726(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
5d8207630dd39307ce334cc2c23e3435bc9cf24c
# Dataset Card for "freshqa_10_06" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
natyou/freshqa_10_06
[ "region:us" ]
2023-10-11T14:23:22+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "split", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "effective_year", "dtype": "string"}, {"name": "next_review", "dtype": "string"}, {"name": "false_premise", "dtype": "bool"}, {"name": "num_hops", "dtype": "string"}, {"name": "fact_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "answer_0", "dtype": "string"}, {"name": "answer_1", "dtype": "string"}, {"name": "answer_2", "dtype": "string"}, {"name": "answer_3", "dtype": "string"}, {"name": "answer_4", "dtype": "string"}, {"name": "answer_5", "dtype": "string"}, {"name": "answer_6", "dtype": "string"}, {"name": "answer_7", "dtype": "string"}, {"name": "answer_8", "dtype": "string"}, {"name": "answer_9", "dtype": "string"}, {"name": "note", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 192891, "num_examples": 500}, {"name": "dev", "num_bytes": 39203, "num_examples": 100}], "download_size": 129810, "dataset_size": 232094}}
2023-10-11T14:26:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "freshqa_10_06" More Information needed
[ "# Dataset Card for \"freshqa_10_06\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"freshqa_10_06\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"freshqa_10_06\"\n\nMore Information needed" ]
41f9e2b2a43262a5416df01114ef433081903029
# Dataset Card for "53284ebf" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/53284ebf
[ "region:us" ]
2023-10-11T14:38:41+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 191, "num_examples": 10}], "download_size": 1401, "dataset_size": 191}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-11T14:38:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "53284ebf" More Information needed
[ "# Dataset Card for \"53284ebf\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"53284ebf\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"53284ebf\"\n\nMore Information needed" ]
871b53445a98a232ccdfa596feeaf15b0cd99771
# Dataset Card for Evaluation run of ehartford/samantha-1.2-mistral-7b

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/ehartford/samantha-1.2-mistral-7b
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [ehartford/samantha-1.2-mistral-7b](https://huggingface.co/ehartford/samantha-1.2-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ehartford__samantha-1.2-mistral-7b",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-24T06:58:18.439243](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__samantha-1.2-mistral-7b/blob/main/results_2023-10-24T06-58-18.439243.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "em": 0.002726510067114094,
        "em_stderr": 0.0005340111700415926,
        "f1": 0.06134647651006727,
        "f1_stderr": 0.001402920930367906,
        "acc": 0.47757263909840575,
        "acc_stderr": 0.010941242547603296
    },
    "harness|drop|3": {
        "em": 0.002726510067114094,
        "em_stderr": 0.0005340111700415926,
        "f1": 0.06134647651006727,
        "f1_stderr": 0.001402920930367906
    },
    "harness|gsm8k|5": {
        "acc": 0.16982562547384383,
        "acc_stderr": 0.010342572360861202
    },
    "harness|winogrande|5": {
        "acc": 0.7853196527229677,
        "acc_stderr": 0.011539912734345393
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
open-llm-leaderboard/details_ehartford__samantha-1.2-mistral-7b
[ "region:us" ]
2023-10-11T14:46:51+00:00
{"pretty_name": "Evaluation run of ehartford/samantha-1.2-mistral-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ehartford/samantha-1.2-mistral-7b](https://huggingface.co/ehartford/samantha-1.2-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ehartford__samantha-1.2-mistral-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T06:58:18.439243](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__samantha-1.2-mistral-7b/blob/main/results_2023-10-24T06-58-18.439243.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002726510067114094,\n \"em_stderr\": 0.0005340111700415926,\n \"f1\": 0.06134647651006727,\n \"f1_stderr\": 0.001402920930367906,\n \"acc\": 0.47757263909840575,\n \"acc_stderr\": 0.010941242547603296\n },\n \"harness|drop|3\": {\n \"em\": 0.002726510067114094,\n \"em_stderr\": 0.0005340111700415926,\n \"f1\": 0.06134647651006727,\n \"f1_stderr\": 0.001402920930367906\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16982562547384383,\n \"acc_stderr\": 0.010342572360861202\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7853196527229677,\n \"acc_stderr\": 0.011539912734345393\n }\n}\n```", "repo_url": "https://huggingface.co/ehartford/samantha-1.2-mistral-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|arc:challenge|25_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T06_58_18.439243", "path": ["**/details_harness|drop|3_2023-10-24T06-58-18.439243.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T06-58-18.439243.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T06_58_18.439243", "path": ["**/details_harness|gsm8k|5_2023-10-24T06-58-18.439243.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T06-58-18.439243.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hellaswag|10_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-46-28.898359.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-46-28.898359.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T15-46-28.898359.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T15-46-28.898359.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T15-46-28.898359.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T06_58_18.439243", "path": ["**/details_harness|winogrande|5_2023-10-24T06-58-18.439243.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T06-58-18.439243.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T15_46_28.898359", "path": ["results_2023-10-11T15-46-28.898359.parquet"]}, {"split": "2023_10_24T06_58_18.439243", "path": ["results_2023-10-24T06-58-18.439243.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T06-58-18.439243.parquet"]}]}]}
2023-10-24T05:58:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of ehartford/samantha-1.2-mistral-7b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model ehartford/samantha-1.2-mistral-7b on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-24T06:58:18.439243 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
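The loading snippet referenced just above ("you can for instance do the following:") was stripped out of this processed copy of the card. A minimal reconstruction following the pattern the sibling cards use; the repo id is an assumption inferred from the `details_<org>__<model>` convention, since this field replaced the real URL with the literal "URL":

```python
# Assumed repo id; the processed card text above elides it.
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ehartford__samantha-1.2-mistral-7b",
	"harness_winogrande_5",  # one of the configs enumerated in this record's metadata
	split="latest")
```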
[ "# Dataset Card for Evaluation run of ehartford/samantha-1.2-mistral-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ehartford/samantha-1.2-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T06:58:18.439243(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of ehartford/samantha-1.2-mistral-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ehartford/samantha-1.2-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-24T06:58:18.439243(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 22, 31, 170, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ehartford/samantha-1.2-mistral-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ehartford/samantha-1.2-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T06:58:18.439243(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
102aa7f457c2466f93e5a6953ec84a40b844e816
# chess-AI-database
This is the main database for Chess AI
For more detail, please see [Chess-AI-Pytorch](https://github.com/Mai0313/Chess-AI-Pytorch)
Mai0313/Chess-AI-Database
[ "region:us" ]
2023-10-11T15:06:24+00:00
{}
2023-10-11T16:53:59+00:00
[]
[]
TAGS #region-us
# chess-AI-database This is the main database for Chess AI For more detail, please see Chess-AI-Pytorch
[ "# chess-AI-database\nThis is the main database for Chess AI\nFor more detail, please see Chess-AI-Pytorch" ]
[ "TAGS\n#region-us \n", "# chess-AI-database\nThis is the main database for Chess AI\nFor more detail, please see Chess-AI-Pytorch" ]
[ 6, 35 ]
[ "passage: TAGS\n#region-us \n# chess-AI-database\nThis is the main database for Chess AI\nFor more detail, please see Chess-AI-Pytorch" ]
1b0a1826fbd8e8edd95b9e928891e9412afeb534
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench7

## Dataset Description

- **Homepage:**
- **Repository:** https://huggingface.co/Undi95/Mistral-11B-TestBench7
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench7](https://huggingface.co/Undi95/Mistral-11B-TestBench7) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench7",
	"harness_truthfulqa_mc_0",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-11T16:09:31.642289](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench7/blob/main/results_2023-10-11T16-09-31.642289.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "acc": 0.6399052867360159,
        "acc_stderr": 0.03310704632621164,
        "acc_norm": 0.6439213227226402,
        "acc_norm_stderr": 0.03308447285363473,
        "mc1": 0.29498164014687883,
        "mc1_stderr": 0.015964400965589657,
        "mc2": 0.4691495265456508,
        "mc2_stderr": 0.014857248788144817
    },
    "harness|arc:challenge|25": {
        "acc": 0.590443686006826,
        "acc_stderr": 0.014370358632472432,
        "acc_norm": 0.6331058020477816,
        "acc_norm_stderr": 0.014084133118104298
    },
    "harness|hellaswag|10": {
        "acc": 0.63433578968333,
        "acc_stderr": 0.004806316342709402,
        "acc_norm": 0.8286197968532165,
        "acc_norm_stderr": 0.0037607069750393053
    },
    "harness|hendrycksTest-abstract_algebra|5": {
        "acc": 0.33,
        "acc_stderr": 0.04725815626252605,
        "acc_norm": 0.33,
        "acc_norm_stderr": 0.04725815626252605
    },
    "harness|hendrycksTest-anatomy|5": {
        "acc": 0.6444444444444445,
        "acc_stderr": 0.04135176749720385,
        "acc_norm": 0.6444444444444445,
        "acc_norm_stderr": 0.04135176749720385
    },
    "harness|hendrycksTest-astronomy|5": {
        "acc": 0.6710526315789473,
        "acc_stderr": 0.038234289699266046,
        "acc_norm": 0.6710526315789473,
        "acc_norm_stderr": 0.038234289699266046
    },
    "harness|hendrycksTest-business_ethics|5": {
        "acc": 0.58,
        "acc_stderr": 0.049604496374885836,
        "acc_norm": 0.58,
        "acc_norm_stderr": 0.049604496374885836
    },
    "harness|hendrycksTest-clinical_knowledge|5": {
        "acc": 0.6943396226415094,
        "acc_stderr": 0.028353298073322663,
        "acc_norm": 0.6943396226415094,
        "acc_norm_stderr": 0.028353298073322663
    },
    "harness|hendrycksTest-college_biology|5": {
        "acc": 0.7083333333333334,
        "acc_stderr": 0.038009680605548594,
        "acc_norm": 0.7083333333333334,
        "acc_norm_stderr": 0.038009680605548594
    },
    "harness|hendrycksTest-college_chemistry|5": {
        "acc": 0.47,
        "acc_stderr": 0.05016135580465919,
        "acc_norm": 0.47,
        "acc_norm_stderr": 0.05016135580465919
    },
    "harness|hendrycksTest-college_computer_science|5": {
        "acc": 0.54,
        "acc_stderr": 0.05009082659620333,
        "acc_norm": 0.54,
        "acc_norm_stderr": 0.05009082659620333
    },
    "harness|hendrycksTest-college_mathematics|5": {
        "acc": 0.38,
        "acc_stderr": 0.04878317312145633,
        "acc_norm": 0.38,
        "acc_norm_stderr": 0.04878317312145633
    },
    "harness|hendrycksTest-college_medicine|5": {
        "acc": 0.6647398843930635,
        "acc_stderr": 0.03599586301247077,
        "acc_norm": 0.6647398843930635,
        "acc_norm_stderr": 0.03599586301247077
    },
    "harness|hendrycksTest-college_physics|5": {
        "acc": 0.4411764705882353,
        "acc_stderr": 0.049406356306056595,
        "acc_norm": 0.4411764705882353,
        "acc_norm_stderr": 0.049406356306056595
    },
    "harness|hendrycksTest-computer_security|5": {
        "acc": 0.79,
        "acc_stderr": 0.04093601807403326,
        "acc_norm": 0.79,
        "acc_norm_stderr": 0.04093601807403326
    },
    "harness|hendrycksTest-conceptual_physics|5": {
        "acc": 0.5361702127659574,
        "acc_stderr": 0.032600385118357715,
        "acc_norm": 0.5361702127659574,
        "acc_norm_stderr": 0.032600385118357715
    },
    "harness|hendrycksTest-econometrics|5": {
        "acc": 0.49122807017543857,
        "acc_stderr": 0.04702880432049615,
        "acc_norm": 0.49122807017543857,
        "acc_norm_stderr": 0.04702880432049615
    },
    "harness|hendrycksTest-electrical_engineering|5": {
        "acc": 0.5379310344827586,
        "acc_stderr": 0.04154659671707548,
        "acc_norm": 0.5379310344827586,
        "acc_norm_stderr": 0.04154659671707548
    },
    "harness|hendrycksTest-elementary_mathematics|5": {
        "acc": 0.41534391534391535,
        "acc_stderr": 0.025379524910778405,
        "acc_norm": 0.41534391534391535,
        "acc_norm_stderr": 0.025379524910778405
    },
    "harness|hendrycksTest-formal_logic|5": {
        "acc": 0.42063492063492064,
        "acc_stderr": 0.04415438226743744,
        "acc_norm": 0.42063492063492064,
        "acc_norm_stderr": 0.04415438226743744
    },
    "harness|hendrycksTest-global_facts|5": {
        "acc": 0.39,
        "acc_stderr": 0.04902071300001975,
        "acc_norm": 0.39,
        "acc_norm_stderr": 0.04902071300001975
    },
    "harness|hendrycksTest-high_school_biology|5": {
        "acc": 0.7806451612903226,
        "acc_stderr": 0.023540799358723295,
        "acc_norm": 0.7806451612903226,
        "acc_norm_stderr": 0.023540799358723295
    },
    "harness|hendrycksTest-high_school_chemistry|5": {
        "acc": 0.4975369458128079,
        "acc_stderr": 0.03517945038691063,
        "acc_norm": 0.4975369458128079,
        "acc_norm_stderr": 0.03517945038691063
    },
    "harness|hendrycksTest-high_school_computer_science|5": {
        "acc": 0.7,
        "acc_stderr": 0.046056618647183814,
        "acc_norm": 0.7,
        "acc_norm_stderr": 0.046056618647183814
    },
    "harness|hendrycksTest-high_school_european_history|5": {
        "acc": 0.7696969696969697,
        "acc_stderr": 0.0328766675860349,
        "acc_norm": 0.7696969696969697,
        "acc_norm_stderr": 0.0328766675860349
    },
    "harness|hendrycksTest-high_school_geography|5": {
        "acc": 0.803030303030303,
        "acc_stderr": 0.02833560973246336,
        "acc_norm": 0.803030303030303,
        "acc_norm_stderr": 0.02833560973246336
    },
    "harness|hendrycksTest-high_school_government_and_politics|5": {
        "acc": 0.8808290155440415,
        "acc_stderr": 0.023381935348121434,
        "acc_norm": 0.8808290155440415,
        "acc_norm_stderr": 0.023381935348121434
    },
    "harness|hendrycksTest-high_school_macroeconomics|5": {
        "acc": 0.6846153846153846,
        "acc_stderr": 0.023559646983189946,
        "acc_norm": 0.6846153846153846,
        "acc_norm_stderr": 0.023559646983189946
    },
    "harness|hendrycksTest-high_school_mathematics|5": {
        "acc": 0.3333333333333333,
        "acc_stderr": 0.028742040903948496,
        "acc_norm": 0.3333333333333333,
        "acc_norm_stderr": 0.028742040903948496
    },
    "harness|hendrycksTest-high_school_microeconomics|5": {
        "acc": 0.6596638655462185,
        "acc_stderr": 0.030778057422931673,
        "acc_norm": 0.6596638655462185,
        "acc_norm_stderr": 0.030778057422931673
    },
    "harness|hendrycksTest-high_school_physics|5": {
        "acc": 0.3708609271523179,
        "acc_stderr": 0.03943966699183629,
        "acc_norm": 0.3708609271523179,
        "acc_norm_stderr": 0.03943966699183629
    },
    "harness|hendrycksTest-high_school_psychology|5": {
        "acc": 0.8293577981651377,
        "acc_stderr": 0.01612927102509986,
        "acc_norm": 0.8293577981651377,
        "acc_norm_stderr": 0.01612927102509986
    },
    "harness|hendrycksTest-high_school_statistics|5": {
        "acc": 0.6064814814814815,
        "acc_stderr": 0.03331747876370312,
        "acc_norm": 0.6064814814814815,
        "acc_norm_stderr": 0.03331747876370312
    },
    "harness|hendrycksTest-high_school_us_history|5": {
        "acc": 0.803921568627451,
        "acc_stderr": 0.027865942286639318,
        "acc_norm": 0.803921568627451,
        "acc_norm_stderr": 0.027865942286639318
    },
    "harness|hendrycksTest-high_school_world_history|5": {
        "acc": 0.7721518987341772,
        "acc_stderr": 0.02730348459906943,
        "acc_norm": 0.7721518987341772,
        "acc_norm_stderr": 0.02730348459906943
    },
    "harness|hendrycksTest-human_aging|5": {
        "acc": 0.6547085201793722,
        "acc_stderr": 0.03191100192835794,
        "acc_norm": 0.6547085201793722,
        "acc_norm_stderr": 0.03191100192835794
    },
    "harness|hendrycksTest-human_sexuality|5": {
        "acc": 0.7633587786259542,
        "acc_stderr": 0.03727673575596914,
        "acc_norm": 0.7633587786259542,
        "acc_norm_stderr": 0.03727673575596914
    },
    "harness|hendrycksTest-international_law|5": {
        "acc": 0.768595041322314,
        "acc_stderr": 0.03849856098794088,
        "acc_norm": 0.768595041322314,
        "acc_norm_stderr": 0.03849856098794088
    },
    "harness|hendrycksTest-jurisprudence|5": {
        "acc": 0.7685185185185185,
        "acc_stderr": 0.04077494709252627,
        "acc_norm": 0.7685185185185185,
        "acc_norm_stderr": 0.04077494709252627
    },
    "harness|hendrycksTest-logical_fallacies|5": {
        "acc": 0.7607361963190185,
        "acc_stderr": 0.0335195387952127,
        "acc_norm": 0.7607361963190185,
        "acc_norm_stderr": 0.0335195387952127
    },
    "harness|hendrycksTest-machine_learning|5": {
        "acc": 0.4642857142857143,
        "acc_stderr": 0.04733667890053756,
        "acc_norm": 0.4642857142857143,
        "acc_norm_stderr": 0.04733667890053756
    },
    "harness|hendrycksTest-management|5": {
        "acc": 0.7961165048543689,
        "acc_stderr": 0.039891398595317706,
        "acc_norm": 0.7961165048543689,
        "acc_norm_stderr": 0.039891398595317706
    },
    "harness|hendrycksTest-marketing|5": {
        "acc": 0.8632478632478633,
        "acc_stderr": 0.02250903393707781,
        "acc_norm": 0.8632478632478633,
        "acc_norm_stderr": 0.02250903393707781
    },
    "harness|hendrycksTest-medical_genetics|5": {
        "acc": 0.71,
        "acc_stderr": 0.045604802157206845,
        "acc_norm": 0.71,
        "acc_norm_stderr": 0.045604802157206845
    },
    "harness|hendrycksTest-miscellaneous|5": {
        "acc": 0.8109833971902938,
        "acc_stderr": 0.014000791294407006,
        "acc_norm": 0.8109833971902938,
        "acc_norm_stderr": 0.014000791294407006
    },
    "harness|hendrycksTest-moral_disputes|5": {
        "acc": 0.6965317919075145,
        "acc_stderr": 0.024752411960917205,
        "acc_norm": 0.6965317919075145,
        "acc_norm_stderr": 0.024752411960917205
    },
    "harness|hendrycksTest-moral_scenarios|5": {
        "acc": 0.38100558659217876,
        "acc_stderr": 0.01624202883405362,
        "acc_norm": 0.38100558659217876,
        "acc_norm_stderr": 0.01624202883405362
    },
    "harness|hendrycksTest-nutrition|5": {
        "acc": 0.7320261437908496,
        "acc_stderr": 0.025360603796242557,
        "acc_norm": 0.7320261437908496,
        "acc_norm_stderr": 0.025360603796242557
    },
    "harness|hendrycksTest-philosophy|5": {
        "acc": 0.7041800643086816,
        "acc_stderr": 0.025922371788818777,
        "acc_norm": 0.7041800643086816,
        "acc_norm_stderr": 0.025922371788818777
    },
    "harness|hendrycksTest-prehistory|5": {
        "acc": 0.7191358024691358,
        "acc_stderr": 0.025006469755799208,
        "acc_norm": 0.7191358024691358,
        "acc_norm_stderr": 0.025006469755799208
    },
    "harness|hendrycksTest-professional_accounting|5": {
        "acc": 0.48226950354609927,
        "acc_stderr": 0.02980873964223777,
        "acc_norm": 0.48226950354609927,
        "acc_norm_stderr": 0.02980873964223777
    },
    "harness|hendrycksTest-professional_law|5": {
        "acc": 0.4367666232073012,
        "acc_stderr": 0.012667701919603662,
        "acc_norm": 0.4367666232073012,
        "acc_norm_stderr": 0.012667701919603662
    },
    "harness|hendrycksTest-professional_medicine|5": {
        "acc": 0.6911764705882353,
        "acc_stderr": 0.02806499816704009,
        "acc_norm": 0.6911764705882353,
        "acc_norm_stderr": 0.02806499816704009
    },
    "harness|hendrycksTest-professional_psychology|5": {
        "acc": 0.6601307189542484,
        "acc_stderr": 0.019162418588623557,
        "acc_norm": 0.6601307189542484,
        "acc_norm_stderr": 0.019162418588623557
    },
    "harness|hendrycksTest-public_relations|5": {
        "acc": 0.6636363636363637,
        "acc_stderr": 0.04525393596302506,
        "acc_norm": 0.6636363636363637,
        "acc_norm_stderr": 0.04525393596302506
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.726530612244898,
        "acc_stderr": 0.02853556033712844,
        "acc_norm": 0.726530612244898,
        "acc_norm_stderr": 0.02853556033712844
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.845771144278607,
        "acc_stderr": 0.02553843336857833,
        "acc_norm": 0.845771144278607,
        "acc_norm_stderr": 0.02553843336857833
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.86,
        "acc_stderr": 0.034873508801977704,
        "acc_norm": 0.86,
        "acc_norm_stderr": 0.034873508801977704
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.536144578313253,
        "acc_stderr": 0.038823108508905954,
        "acc_norm": 0.536144578313253,
        "acc_norm_stderr": 0.038823108508905954
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.8362573099415205,
        "acc_stderr": 0.028380919596145866,
        "acc_norm": 0.8362573099415205,
        "acc_norm_stderr": 0.028380919596145866
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.29498164014687883,
        "mc1_stderr": 0.015964400965589657,
        "mc2": 0.4691495265456508,
        "mc2_stderr": 0.014857248788144817
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench7
[ "region:us" ]
2023-10-11T15:09:54+00:00
{"pretty_name": "Evaluation run of Undi95/Mistral-11B-TestBench7", "dataset_summary": "Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench7](https://huggingface.co/Undi95/Mistral-11B-TestBench7) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench7\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T16:09:31.642289](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench7/blob/main/results_2023-10-11T16-09-31.642289.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6399052867360159,\n \"acc_stderr\": 0.03310704632621164,\n \"acc_norm\": 0.6439213227226402,\n \"acc_norm_stderr\": 0.03308447285363473,\n \"mc1\": 0.29498164014687883,\n \"mc1_stderr\": 0.015964400965589657,\n \"mc2\": 0.4691495265456508,\n \"mc2_stderr\": 0.014857248788144817\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.590443686006826,\n \"acc_stderr\": 0.014370358632472432,\n \"acc_norm\": 0.6331058020477816,\n \"acc_norm_stderr\": 0.014084133118104298\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.63433578968333,\n \"acc_stderr\": 0.004806316342709402,\n \"acc_norm\": 0.8286197968532165,\n \"acc_norm_stderr\": 0.0037607069750393053\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n \"acc_stderr\": 0.038009680605548594,\n \"acc_norm\": 0.7083333333333334,\n \"acc_norm_stderr\": 0.038009680605548594\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n 
\"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5361702127659574,\n \"acc_stderr\": 0.032600385118357715,\n \"acc_norm\": 0.5361702127659574,\n \"acc_norm_stderr\": 0.032600385118357715\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41534391534391535,\n \"acc_stderr\": 0.025379524910778405,\n \"acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.025379524910778405\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723295,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723295\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.803030303030303,\n \"acc_stderr\": 0.02833560973246336,\n \"acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.02833560973246336\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.023381935348121434,\n \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.023381935348121434\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.6846153846153846,\n \"acc_stderr\": 0.023559646983189946,\n \"acc_norm\": 0.6846153846153846,\n \"acc_norm_stderr\": 0.023559646983189946\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948496,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948496\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8293577981651377,\n \"acc_stderr\": 0.01612927102509986,\n \"acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.01612927102509986\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6064814814814815,\n \"acc_stderr\": 0.03331747876370312,\n \"acc_norm\": 0.6064814814814815,\n \"acc_norm_stderr\": 0.03331747876370312\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639318,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639318\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7721518987341772,\n \"acc_stderr\": 0.02730348459906943,\n \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.02730348459906943\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6547085201793722,\n \"acc_stderr\": 0.03191100192835794,\n \"acc_norm\": 0.6547085201793722,\n \"acc_norm_stderr\": 0.03191100192835794\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596914,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596914\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.02250903393707781,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.02250903393707781\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8109833971902938,\n \"acc_stderr\": 0.014000791294407006,\n \"acc_norm\": 
0.8109833971902938,\n \"acc_norm_stderr\": 0.014000791294407006\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6965317919075145,\n \"acc_stderr\": 0.024752411960917205,\n \"acc_norm\": 0.6965317919075145,\n \"acc_norm_stderr\": 0.024752411960917205\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38100558659217876,\n \"acc_stderr\": 0.01624202883405362,\n \"acc_norm\": 0.38100558659217876,\n \"acc_norm_stderr\": 0.01624202883405362\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n \"acc_stderr\": 0.025922371788818777,\n \"acc_norm\": 0.7041800643086816,\n \"acc_norm_stderr\": 0.025922371788818777\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7191358024691358,\n \"acc_stderr\": 0.025006469755799208,\n \"acc_norm\": 0.7191358024691358,\n \"acc_norm_stderr\": 0.025006469755799208\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4367666232073012,\n \"acc_stderr\": 0.012667701919603662,\n \"acc_norm\": 0.4367666232073012,\n \"acc_norm_stderr\": 0.012667701919603662\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.02806499816704009,\n \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.02806499816704009\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.019162418588623557,\n \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.019162418588623557\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.02853556033712844,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.02853556033712844\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.02553843336857833,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.02553843336857833\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29498164014687883,\n \"mc1_stderr\": 0.015964400965589657,\n \"mc2\": 0.4691495265456508,\n \"mc2_stderr\": 0.014857248788144817\n }\n}\n```", "repo_url": "https://huggingface.co/Undi95/Mistral-11B-TestBench7", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": 
[{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|arc:challenge|25_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hellaswag|10_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-09-31.642289.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-09-31.642289.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-09-31.642289.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T16-09-31.642289.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T16-09-31.642289.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T16_09_31.642289", "path": ["results_2023-10-11T16-09-31.642289.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T16-09-31.642289.parquet"]}]}]}
2023-10-11T15:10:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench7 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench7 on the Open LLM Leaderboard. The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-11T16:09:31.642289 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
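The flattened card text above ends at "do the following:", where the original loading snippet was dropped during flattening. A minimal sketch of what that snippet would look like, assuming the leaderboard's usual `details_<org>__<model>` repo naming (an assumption, not taken from this record's stripped text) and one of the config names declared in the metadata above:

```python
from datasets import load_dataset

# Hypothetical repo id, following the leaderboard's details_<org>__<model> convention.
# "harness_truthfulqa_mc_0" is one of the config names listed in the metadata above;
# the "latest" split points to the most recent run.
data = load_dataset(
    "open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench7",
    "harness_truthfulqa_mc_0",
    split="latest",
)
```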
[ "# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench7", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T16:09:31.642289(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench7", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-11T16:09:31.642289(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 22, 31, 170, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench7## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T16:09:31.642289(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
e9ca6fc3adeaf5936131c338805844c51f9b7ad4
# Dataset Card for "isiafoodcap_all" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
advancedcv/isiafoodcap_all
[ "region:us" ]
2023-10-11T15:11:14+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "category", "dtype": "int64"}, {"name": "cat", "dtype": "string"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58102864554.243, "num_examples": 313023}, {"name": "test", "num_bytes": 6861922883.024, "num_examples": 86703}], "download_size": 9243094517, "dataset_size": 64964787437.267}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-10-12T10:12:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "isiafoodcap_all" More Information needed
[ "# Dataset Card for \"isiafoodcap_all\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"isiafoodcap_all\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"isiafoodcap_all\"\n\nMore Information needed" ]
202f25470392a378fea9cce2a28f2ff1cf2085ef
# Dataset Card for "Cosmic_dataset_V3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
slaqrichi/Cosmic_dataset_V3
[ "region:us" ]
2023-10-11T15:12:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "ID", "dtype": "int64"}, {"name": "requirement", "dtype": "string"}, {"name": "functional process", "dtype": "string"}, {"name": "functional user", "dtype": "string"}, {"name": "sub processes", "dtype": "string"}, {"name": "data groups", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22158.157894736843, "num_examples": 17}, {"name": "test", "num_bytes": 1303.421052631579, "num_examples": 1}, {"name": "valid", "num_bytes": 1303.421052631579, "num_examples": 1}], "download_size": 43297, "dataset_size": 24765.000000000004}}
2023-10-11T15:12:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Cosmic_dataset_V3" More Information needed
[ "# Dataset Card for \"Cosmic_dataset_V3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Cosmic_dataset_V3\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Cosmic_dataset_V3\"\n\nMore Information needed" ]